Nov 23 01:40:58 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Nov 23 01:40:58 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 23 01:40:58 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Nov 23 01:40:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 23 01:40:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 23 01:40:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 23 01:40:58 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 23 01:40:58 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 23 01:40:58 localhost kernel: signal: max sigframe size: 1776
Nov 23 01:40:58 localhost kernel: BIOS-provided physical RAM map:
Nov 23 01:40:58 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 23 01:40:58 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 23 01:40:58 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 23 01:40:58 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 23 01:40:58 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 23 01:40:58 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 23 01:40:58 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 23 01:40:58 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Nov 23 01:40:58 localhost kernel: NX (Execute Disable) protection: active
Nov 23 01:40:58 localhost kernel: SMBIOS 2.8 present.
Nov 23 01:40:58 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 23 01:40:58 localhost kernel: Hypervisor detected: KVM
Nov 23 01:40:58 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 23 01:40:58 localhost kernel: kvm-clock: using sched offset of 2944950681 cycles
Nov 23 01:40:58 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 23 01:40:58 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 23 01:40:58 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Nov 23 01:40:58 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 23 01:40:58 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 23 01:40:58 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 23 01:40:58 localhost kernel: Using GB pages for direct mapping
Nov 23 01:40:58 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Nov 23 01:40:58 localhost kernel: ACPI: Early table checksum verification disabled
Nov 23 01:40:58 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 23 01:40:58 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 01:40:58 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 01:40:58 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 01:40:58 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 23 01:40:58 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 01:40:58 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 23 01:40:58 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 23 01:40:58 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 23 01:40:58 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 23 01:40:58 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 23 01:40:58 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 23 01:40:58 localhost kernel: No NUMA configuration found
Nov 23 01:40:58 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Nov 23 01:40:58 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd3000-0x43fffdfff]
Nov 23 01:40:58 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Nov 23 01:40:58 localhost kernel: Zone ranges:
Nov 23 01:40:58 localhost kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 23 01:40:58 localhost kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 23 01:40:58 localhost kernel:  Normal   [mem 0x0000000100000000-0x000000043fffffff]
Nov 23 01:40:58 localhost kernel:  Device   empty
Nov 23 01:40:58 localhost kernel: Movable zone start for each node
Nov 23 01:40:58 localhost kernel: Early memory node ranges
Nov 23 01:40:58 localhost kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 23 01:40:58 localhost kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 23 01:40:58 localhost kernel:  node   0: [mem 0x0000000100000000-0x000000043fffffff]
Nov 23 01:40:58 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Nov 23 01:40:58 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 23 01:40:58 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 23 01:40:58 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 23 01:40:58 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 23 01:40:58 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 23 01:40:58 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 23 01:40:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 23 01:40:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 23 01:40:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 23 01:40:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 23 01:40:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 23 01:40:58 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 23 01:40:58 localhost kernel: TSC deadline timer available
Nov 23 01:40:58 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 23 01:40:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 23 01:40:58 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 23 01:40:58 localhost kernel: Booting paravirtualized kernel on KVM
Nov 23 01:40:58 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 23 01:40:58 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 23 01:40:58 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Nov 23 01:40:58 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 23 01:40:58 localhost kernel: Fallback order for Node 0: 0 
Nov 23 01:40:58 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 4128475
Nov 23 01:40:58 localhost kernel: Policy zone: Normal
Nov 23 01:40:58 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Nov 23 01:40:58 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Nov 23 01:40:58 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 23 01:40:58 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 23 01:40:58 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 01:40:58 localhost kernel: software IO TLB: area num 8.
Nov 23 01:40:58 localhost kernel: Memory: 2826284K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741268K reserved, 0K cma-reserved)
Nov 23 01:40:58 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Nov 23 01:40:58 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 23 01:40:58 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Nov 23 01:40:58 localhost kernel: ftrace: allocated 176 pages with 3 groups
Nov 23 01:40:58 localhost kernel: Dynamic Preempt: voluntary
Nov 23 01:40:58 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 01:40:58 localhost kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 23 01:40:58 localhost kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 23 01:40:58 localhost kernel: 	Rude variant of Tasks RCU enabled.
Nov 23 01:40:58 localhost kernel: 	Tracing variant of Tasks RCU enabled.
Nov 23 01:40:58 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 01:40:58 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 23 01:40:58 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 23 01:40:58 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 01:40:58 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 23 01:40:58 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Nov 23 01:40:58 localhost kernel: Console: colour VGA+ 80x25
Nov 23 01:40:58 localhost kernel: printk: console [tty0] enabled
Nov 23 01:40:58 localhost kernel: printk: console [ttyS0] enabled
Nov 23 01:40:58 localhost kernel: ACPI: Core revision 20211217
Nov 23 01:40:58 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 23 01:40:58 localhost kernel: x2apic enabled
Nov 23 01:40:58 localhost kernel: Switched APIC routing to physical x2apic.
Nov 23 01:40:58 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 23 01:40:58 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 23 01:40:58 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 23 01:40:58 localhost kernel: LSM: Security Framework initializing
Nov 23 01:40:58 localhost kernel: Yama: becoming mindful.
Nov 23 01:40:58 localhost kernel: SELinux:  Initializing.
Nov 23 01:40:58 localhost kernel: LSM support for eBPF active
Nov 23 01:40:58 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 01:40:58 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 01:40:58 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 23 01:40:58 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 23 01:40:58 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 23 01:40:58 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 23 01:40:58 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 23 01:40:58 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 23 01:40:58 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 23 01:40:58 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 23 01:40:58 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 23 01:40:58 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 23 01:40:58 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 23 01:40:58 localhost kernel: Freeing SMP alternatives memory: 36K
Nov 23 01:40:58 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 23 01:40:58 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Nov 23 01:40:58 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Nov 23 01:40:58 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Nov 23 01:40:58 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Nov 23 01:40:58 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 23 01:40:58 localhost kernel: ... version:                0
Nov 23 01:40:58 localhost kernel: ... bit width:              48
Nov 23 01:40:58 localhost kernel: ... generic registers:      6
Nov 23 01:40:58 localhost kernel: ... value mask:             0000ffffffffffff
Nov 23 01:40:58 localhost kernel: ... max period:             00007fffffffffff
Nov 23 01:40:58 localhost kernel: ... fixed-purpose events:   0
Nov 23 01:40:58 localhost kernel: ... event mask:             000000000000003f
Nov 23 01:40:58 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 23 01:40:58 localhost kernel: rcu: 	Max phase no-delay instances is 400.
Nov 23 01:40:58 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 23 01:40:58 localhost kernel: x86: Booting SMP configuration:
Nov 23 01:40:58 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 23 01:40:58 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 23 01:40:58 localhost kernel: smpboot: Max logical packages: 8
Nov 23 01:40:58 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 23 01:40:58 localhost kernel: node 0 deferred pages initialised in 25ms
Nov 23 01:40:58 localhost kernel: devtmpfs: initialized
Nov 23 01:40:58 localhost kernel: x86/mm: Memory block size: 128MB
Nov 23 01:40:58 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 01:40:58 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 23 01:40:58 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 01:40:58 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 01:40:58 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Nov 23 01:40:58 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 01:40:58 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 01:40:58 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 23 01:40:58 localhost kernel: audit: type=2000 audit(1763880056.717:1): state=initialized audit_enabled=0 res=1
Nov 23 01:40:58 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 23 01:40:58 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 01:40:58 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 23 01:40:58 localhost kernel: cpuidle: using governor menu
Nov 23 01:40:58 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Nov 23 01:40:58 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 01:40:58 localhost kernel: PCI: Using configuration type 1 for base access
Nov 23 01:40:58 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 23 01:40:58 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 23 01:40:58 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Nov 23 01:40:58 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 01:40:58 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 01:40:58 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 23 01:40:58 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 23 01:40:58 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 23 01:40:58 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 23 01:40:58 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 01:40:58 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 23 01:40:58 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 23 01:40:58 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 23 01:40:58 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 01:40:58 localhost kernel: ACPI: Interpreter enabled
Nov 23 01:40:58 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 23 01:40:58 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 23 01:40:58 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 23 01:40:58 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 23 01:40:58 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 23 01:40:58 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 01:40:58 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [3] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [4] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [5] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [6] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [7] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [8] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [9] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [10] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [11] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [12] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [13] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [14] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [15] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [16] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [17] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [18] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [19] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [20] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [21] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [22] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [23] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [24] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [25] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [26] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [27] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [28] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [29] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [30] registered
Nov 23 01:40:58 localhost kernel: acpiphp: Slot [31] registered
Nov 23 01:40:58 localhost kernel: PCI host bridge to bus 0000:00
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 01:40:58 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.2: reg 0x20: [io  0xc100-0xc11f]
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 23 01:40:58 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 23 01:40:58 localhost kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc0bf]
Nov 23 01:40:58 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Nov 23 01:40:58 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 23 01:40:58 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Nov 23 01:40:58 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 23 01:40:58 localhost kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Nov 23 01:40:58 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Nov 23 01:40:58 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 23 01:40:58 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Nov 23 01:40:58 localhost kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0c0-0xc0ff]
Nov 23 01:40:58 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 23 01:40:58 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Nov 23 01:40:58 localhost kernel: pci 0000:00:06.0: reg 0x10: [io  0xc120-0xc13f]
Nov 23 01:40:58 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 23 01:40:58 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 23 01:40:58 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 23 01:40:58 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 23 01:40:58 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 23 01:40:58 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 23 01:40:58 localhost kernel: iommu: Default domain type: Translated 
Nov 23 01:40:58 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Nov 23 01:40:58 localhost kernel: SCSI subsystem initialized
Nov 23 01:40:58 localhost kernel: ACPI: bus type USB registered
Nov 23 01:40:58 localhost kernel: usbcore: registered new interface driver usbfs
Nov 23 01:40:58 localhost kernel: usbcore: registered new interface driver hub
Nov 23 01:40:58 localhost kernel: usbcore: registered new device driver usb
Nov 23 01:40:58 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 23 01:40:58 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 23 01:40:58 localhost kernel: PTP clock support registered
Nov 23 01:40:58 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 23 01:40:58 localhost kernel: NetLabel: Initializing
Nov 23 01:40:58 localhost kernel: NetLabel:  domain hash size = 128
Nov 23 01:40:58 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 23 01:40:58 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 23 01:40:58 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 23 01:40:58 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 23 01:40:58 localhost kernel: vgaarb: loaded
Nov 23 01:40:58 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 23 01:40:58 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 01:40:58 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 01:40:58 localhost kernel: pnp: PnP ACPI init
Nov 23 01:40:58 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 23 01:40:58 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 23 01:40:58 localhost kernel: NET: Registered PF_INET protocol family
Nov 23 01:40:58 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 01:40:58 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Nov 23 01:40:58 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 01:40:58 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 23 01:40:58 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 23 01:40:58 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Nov 23 01:40:58 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Nov 23 01:40:58 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Nov 23 01:40:58 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Nov 23 01:40:58 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 01:40:58 localhost kernel: NET: Registered PF_XDP protocol family
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 23 01:40:58 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 23 01:40:58 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 23 01:40:58 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 23 01:40:58 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 27381 usecs
Nov 23 01:40:58 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 23 01:40:58 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 23 01:40:58 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 23 01:40:58 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 23 01:40:58 localhost kernel: ACPI: bus type thunderbolt registered
Nov 23 01:40:58 localhost kernel: Initialise system trusted keyrings
Nov 23 01:40:58 localhost kernel: Key type blacklist registered
Nov 23 01:40:58 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 23 01:40:58 localhost kernel: zbud: loaded
Nov 23 01:40:58 localhost kernel: integrity: Platform Keyring initialized
Nov 23 01:40:58 localhost kernel: NET: Registered PF_ALG protocol family
Nov 23 01:40:58 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 23 01:40:58 localhost kernel: Key type asymmetric registered
Nov 23 01:40:58 localhost kernel: Asymmetric key parser 'x509' registered
Nov 23 01:40:58 localhost kernel: Running certificate verification selftests
Nov 23 01:40:58 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 23 01:40:58 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 23 01:40:58 localhost kernel: io scheduler mq-deadline registered
Nov 23 01:40:58 localhost kernel: io scheduler kyber registered
Nov 23 01:40:58 localhost kernel: io scheduler bfq registered
Nov 23 01:40:58 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 23 01:40:58 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 23 01:40:58 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 23 01:40:58 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 23 01:40:58 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 23 01:40:58 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 23 01:40:58 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 23 01:40:58 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 01:40:58 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 23 01:40:58 localhost kernel: Non-volatile memory driver v1.3
Nov 23 01:40:58 localhost kernel: rdac: device handler registered
Nov 23 01:40:58 localhost kernel: hp_sw: device handler registered
Nov 23 01:40:58 localhost kernel: emc: device handler registered
Nov 23 01:40:58 localhost kernel: alua: device handler registered
Nov 23 01:40:58 localhost kernel: libphy: Fixed MDIO Bus: probed
Nov 23 01:40:58 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 23 01:40:58 localhost kernel: ehci-pci: EHCI PCI platform driver
Nov 23 01:40:58 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 23 01:40:58 localhost kernel: ohci-pci: OHCI PCI platform driver
Nov 23 01:40:58 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Nov 23 01:40:58 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 23 01:40:58 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 23 01:40:58 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 23 01:40:58 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 23 01:40:58 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 23 01:40:58 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 23 01:40:58 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 23 01:40:58 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Nov 23 01:40:58 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 23 01:40:58 localhost kernel: hub 1-0:1.0: USB hub found
Nov 23 01:40:58 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 23 01:40:58 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 23 01:40:58 localhost kernel: usbserial: USB Serial support registered for generic
Nov 23 01:40:58 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 23 01:40:58 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 23 01:40:58 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 23 01:40:58 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 23 01:40:58 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 23 01:40:58 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 23 01:40:58 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 23 01:40:58 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-23T06:40:57 UTC (1763880057)
Nov 23 01:40:58 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 23 01:40:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 23 01:40:58 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 01:40:58 localhost kernel: usbcore: registered new interface driver usbhid
Nov 23 01:40:58 localhost kernel: usbhid: USB HID core driver
Nov 23 01:40:58 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 23 01:40:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 23 01:40:58 localhost kernel: Initializing XFRM netlink socket
Nov 23 01:40:58 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 23 01:40:58 localhost kernel: Segment Routing with IPv6
Nov 23 01:40:58 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 23 01:40:58 localhost kernel: mpls_gso: MPLS GSO support
Nov 23 01:40:58 localhost kernel: IPI shorthand broadcast: enabled
Nov 23 01:40:58 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 23 01:40:58 localhost kernel: AES CTR mode by8 optimization enabled
Nov 23 01:40:58 localhost kernel: sched_clock: Marking stable (769639719, 187293537)->(1084941857, -128008601)
Nov 23 01:40:58 localhost kernel: registered taskstats version 1
Nov 23 01:40:58 localhost kernel: Loading compiled-in X.509 certificates
Nov 23 01:40:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Nov 23 01:40:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 23 01:40:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 23 01:40:58 localhost kernel: zswap: loaded using pool lzo/zbud
Nov 23 01:40:58 localhost kernel: page_owner is disabled
Nov 23 01:40:58 localhost kernel: Key type big_key registered
Nov 23 01:40:58 localhost kernel: Freeing initrd memory: 74232K
Nov 23 01:40:58 localhost kernel: Key type encrypted registered
Nov 23 01:40:58 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 01:40:58 localhost kernel: Loading compiled-in module X.509 certificates
Nov 23 01:40:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Nov 23 01:40:58 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 23 01:40:58 localhost kernel: ima: No architecture policies found
Nov 23 01:40:58 localhost kernel: evm: Initialising EVM extended attributes:
Nov 23 01:40:58 localhost kernel: evm: security.selinux
Nov 23 01:40:58 localhost kernel: evm: security.SMACK64 (disabled)
Nov 23 01:40:58 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 23 01:40:58 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 23 01:40:58 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 23 01:40:58 localhost kernel: evm: security.apparmor (disabled)
Nov 23 01:40:58 localhost kernel: evm: security.ima
Nov 23 01:40:58 localhost kernel: evm: security.capability
Nov 23 01:40:58 localhost kernel: evm: HMAC attrs: 0x1
Nov 23 01:40:58 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 23 01:40:58 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 23 01:40:58 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 23 01:40:58 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 23 01:40:58 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 23 01:40:58 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 23 01:40:58 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 23 01:40:58 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 23 01:40:58 localhost kernel: Freeing unused decrypted memory: 2036K
Nov 23 01:40:58 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Nov 23 01:40:58 localhost kernel: Write protecting the kernel read-only data: 26624k
Nov 23 01:40:58 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 23 01:40:58 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Nov 23 01:40:58 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 23 01:40:58 localhost kernel: Run /init as init process
Nov 23 01:40:58 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 01:40:58 localhost systemd[1]: Detected virtualization kvm.
Nov 23 01:40:58 localhost systemd[1]: Detected architecture x86-64.
Nov 23 01:40:58 localhost systemd[1]: Running in initrd.
Nov 23 01:40:58 localhost systemd[1]: No hostname configured, using default hostname.
Nov 23 01:40:58 localhost systemd[1]: Hostname set to <localhost>.
Nov 23 01:40:58 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 23 01:40:58 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 23 01:40:58 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 01:40:58 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 23 01:40:58 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 23 01:40:58 localhost systemd[1]: Reached target Local File Systems.
Nov 23 01:40:58 localhost systemd[1]: Reached target Path Units.
Nov 23 01:40:58 localhost systemd[1]: Reached target Slice Units.
Nov 23 01:40:58 localhost systemd[1]: Reached target Swaps.
Nov 23 01:40:58 localhost systemd[1]: Reached target Timer Units.
Nov 23 01:40:58 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 23 01:40:58 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 23 01:40:58 localhost systemd[1]: Listening on Journal Socket.
Nov 23 01:40:58 localhost systemd[1]: Listening on udev Control Socket.
Nov 23 01:40:58 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 23 01:40:58 localhost systemd[1]: Reached target Socket Units.
Nov 23 01:40:58 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 23 01:40:58 localhost systemd[1]: Starting Journal Service...
Nov 23 01:40:58 localhost systemd[1]: Starting Load Kernel Modules...
Nov 23 01:40:58 localhost systemd[1]: Starting Create System Users...
Nov 23 01:40:58 localhost systemd[1]: Starting Setup Virtual Console...
Nov 23 01:40:58 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 23 01:40:58 localhost systemd[1]: Finished Load Kernel Modules.
Nov 23 01:40:58 localhost systemd-journald[282]: Journal started
Nov 23 01:40:58 localhost systemd-journald[282]: Runtime Journal (/run/log/journal/94eff25b70704dc88cfe491426a98db3) is 8.0M, max 314.7M, 306.7M free.
Nov 23 01:40:58 localhost systemd-modules-load[283]: Module 'msr' is built in
Nov 23 01:40:58 localhost systemd[1]: Started Journal Service.
Nov 23 01:40:58 localhost systemd[1]: Finished Setup Virtual Console.
Nov 23 01:40:58 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 23 01:40:58 localhost systemd[1]: Starting dracut cmdline hook...
Nov 23 01:40:58 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 23 01:40:58 localhost systemd-sysusers[284]: Creating group 'sgx' with GID 997.
Nov 23 01:40:58 localhost systemd-sysusers[284]: Creating group 'users' with GID 100.
Nov 23 01:40:58 localhost systemd-sysusers[284]: Creating group 'dbus' with GID 81.
Nov 23 01:40:58 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 23 01:40:58 localhost systemd-sysusers[284]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 23 01:40:58 localhost systemd[1]: Finished Create System Users.
Nov 23 01:40:58 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 23 01:40:58 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 01:40:58 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Nov 23 01:40:58 localhost dracut-cmdline[289]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Nov 23 01:40:58 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 01:40:58 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 01:40:58 localhost systemd[1]: Finished dracut cmdline hook.
Nov 23 01:40:58 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 23 01:40:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 01:40:58 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 23 01:40:58 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Nov 23 01:40:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 23 01:40:58 localhost kernel: RPC: Registered udp transport module.
Nov 23 01:40:58 localhost kernel: RPC: Registered tcp transport module.
Nov 23 01:40:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 23 01:40:58 localhost rpc.statd[408]: Version 2.5.4 starting
Nov 23 01:40:58 localhost rpc.statd[408]: Initializing NSM state
Nov 23 01:40:58 localhost rpc.idmapd[413]: Setting log level to 0
Nov 23 01:40:58 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 23 01:40:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 01:40:58 localhost systemd-udevd[426]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 01:40:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 01:40:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 23 01:40:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 23 01:40:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 23 01:40:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 23 01:40:58 localhost systemd[1]: Reached target System Initialization.
Nov 23 01:40:58 localhost systemd[1]: Reached target Basic System.
Nov 23 01:40:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 01:40:58 localhost systemd[1]: Reached target Network.
Nov 23 01:40:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 23 01:40:58 localhost systemd[1]: Starting dracut initqueue hook...
Nov 23 01:40:58 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Nov 23 01:40:58 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 23 01:40:58 localhost kernel: GPT:20971519 != 838860799
Nov 23 01:40:58 localhost systemd-udevd[430]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 01:40:58 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 23 01:40:58 localhost kernel: GPT:20971519 != 838860799
Nov 23 01:40:58 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 23 01:40:58 localhost kernel: vda: vda1 vda2 vda3 vda4
Nov 23 01:40:58 localhost kernel: scsi host0: ata_piix
Nov 23 01:40:58 localhost kernel: scsi host1: ata_piix
Nov 23 01:40:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 23 01:40:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 23 01:40:58 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Nov 23 01:40:59 localhost systemd[1]: Reached target Initrd Root Device.
Nov 23 01:40:59 localhost kernel: ata1: found unknown device (class 0)
Nov 23 01:40:59 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 23 01:40:59 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 23 01:40:59 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 23 01:40:59 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 23 01:40:59 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 23 01:40:59 localhost systemd[1]: Finished dracut initqueue hook.
Nov 23 01:40:59 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 01:40:59 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 23 01:40:59 localhost systemd[1]: Reached target Remote File Systems.
Nov 23 01:40:59 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 23 01:40:59 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 23 01:40:59 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Nov 23 01:40:59 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Nov 23 01:40:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Nov 23 01:40:59 localhost systemd[1]: Mounting /sysroot...
Nov 23 01:40:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 23 01:40:59 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Nov 23 01:40:59 localhost kernel: XFS (vda4): Ending clean mount
Nov 23 01:40:59 localhost systemd[1]: Mounted /sysroot.
Nov 23 01:40:59 localhost systemd[1]: Reached target Initrd Root File System.
Nov 23 01:40:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 23 01:40:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 23 01:40:59 localhost systemd[1]: Reached target Initrd File Systems.
Nov 23 01:40:59 localhost systemd[1]: Reached target Initrd Default Target.
Nov 23 01:40:59 localhost systemd[1]: Starting dracut mount hook...
Nov 23 01:40:59 localhost systemd[1]: Finished dracut mount hook.
Nov 23 01:40:59 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 23 01:40:59 localhost rpc.idmapd[413]: exiting on signal 15
Nov 23 01:40:59 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 23 01:40:59 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 23 01:40:59 localhost systemd[1]: Stopped target Network.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Timer Units.
Nov 23 01:40:59 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 23 01:40:59 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Basic System.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Path Units.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Remote File Systems.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Slice Units.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Socket Units.
Nov 23 01:40:59 localhost systemd[1]: Stopped target System Initialization.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Local File Systems.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Swaps.
Nov 23 01:40:59 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped dracut mount hook.
Nov 23 01:40:59 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 23 01:40:59 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 23 01:40:59 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 23 01:40:59 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 23 01:40:59 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 23 01:40:59 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped Load Kernel Modules.
Nov 23 01:40:59 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 23 01:40:59 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 01:40:59 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 23 01:41:00 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 23 01:41:00 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 23 01:41:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 23 01:41:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 23 01:41:00 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 23 01:41:00 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Closed udev Control Socket.
Nov 23 01:41:00 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Closed udev Kernel Socket.
Nov 23 01:41:00 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 23 01:41:00 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 23 01:41:00 localhost systemd[1]: Starting Cleanup udev Database...
Nov 23 01:41:00 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 23 01:41:00 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 23 01:41:00 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped Create System Users.
Nov 23 01:41:00 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Finished Cleanup udev Database.
Nov 23 01:41:00 localhost systemd[1]: Reached target Switch Root.
Nov 23 01:41:00 localhost systemd[1]: Starting Switch Root...
Nov 23 01:41:00 localhost systemd[1]: Switching root.
Nov 23 01:41:00 localhost systemd-journald[282]: Journal stopped
Nov 23 01:41:00 localhost systemd-journald[282]: Received SIGTERM from PID 1 (systemd).
Nov 23 01:41:00 localhost kernel: audit: type=1404 audit(1763880060.229:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 23 01:41:00 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 01:41:00 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 01:41:00 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 01:41:00 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 01:41:00 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 01:41:00 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 01:41:00 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 01:41:00 localhost kernel: audit: type=1403 audit(1763880060.366:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 01:41:00 localhost systemd[1]: Successfully loaded SELinux policy in 142.263ms.
Nov 23 01:41:00 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.890ms.
Nov 23 01:41:00 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 01:41:00 localhost systemd[1]: Detected virtualization kvm.
Nov 23 01:41:00 localhost systemd[1]: Detected architecture x86-64.
Nov 23 01:41:00 localhost systemd-rc-local-generator[583]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 01:41:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 01:41:00 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped Switch Root.
Nov 23 01:41:00 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 01:41:00 localhost systemd[1]: Created slice Slice /system/getty.
Nov 23 01:41:00 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 23 01:41:00 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 23 01:41:00 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 23 01:41:00 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Nov 23 01:41:00 localhost systemd[1]: Created slice User and Session Slice.
Nov 23 01:41:00 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 23 01:41:00 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 23 01:41:00 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 23 01:41:00 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 23 01:41:00 localhost systemd[1]: Stopped target Switch Root.
Nov 23 01:41:00 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 23 01:41:00 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 23 01:41:00 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 23 01:41:00 localhost systemd[1]: Reached target Path Units.
Nov 23 01:41:00 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 23 01:41:00 localhost systemd[1]: Reached target Slice Units.
Nov 23 01:41:00 localhost systemd[1]: Reached target Swaps.
Nov 23 01:41:00 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 23 01:41:00 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 23 01:41:00 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 23 01:41:00 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 23 01:41:00 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 23 01:41:00 localhost systemd[1]: Listening on udev Control Socket.
Nov 23 01:41:00 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 23 01:41:00 localhost systemd[1]: Mounting Huge Pages File System...
Nov 23 01:41:00 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 23 01:41:00 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 23 01:41:00 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 23 01:41:00 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 01:41:00 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 23 01:41:00 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 23 01:41:00 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 23 01:41:00 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 23 01:41:00 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 23 01:41:00 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 23 01:41:00 localhost systemd[1]: Stopped Journal Service.
Nov 23 01:41:00 localhost systemd[1]: Starting Journal Service...
Nov 23 01:41:00 localhost systemd[1]: Starting Load Kernel Modules...
Nov 23 01:41:00 localhost kernel: fuse: init (API version 7.36)
Nov 23 01:41:00 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 23 01:41:00 localhost kernel: ACPI: bus type drm_connector registered
Nov 23 01:41:00 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 23 01:41:00 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 01:41:00 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 23 01:41:00 localhost systemd[1]: Mounted Huge Pages File System.
Nov 23 01:41:00 localhost systemd-journald[619]: Journal started
Nov 23 01:41:00 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/6e0090cd4cf296f54418e234b90f721c) is 8.0M, max 314.7M, 306.7M free.
Nov 23 01:41:00 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 23 01:41:00 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 01:41:00 localhost systemd-modules-load[620]: Module 'msr' is built in
Nov 23 01:41:01 localhost systemd[1]: Started Journal Service.
Nov 23 01:41:01 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 23 01:41:01 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 23 01:41:01 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 23 01:41:01 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 23 01:41:01 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 23 01:41:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 01:41:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 23 01:41:01 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 01:41:01 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 23 01:41:01 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 01:41:01 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 23 01:41:01 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 23 01:41:01 localhost systemd[1]: Finished Load Kernel Modules.
Nov 23 01:41:01 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 23 01:41:01 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 23 01:41:01 localhost systemd[1]: Mounting FUSE Control File System...
Nov 23 01:41:01 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 23 01:41:01 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 23 01:41:01 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 23 01:41:01 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 23 01:41:01 localhost systemd[1]: Starting Load/Save Random Seed...
Nov 23 01:41:01 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 23 01:41:01 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/6e0090cd4cf296f54418e234b90f721c) is 8.0M, max 314.7M, 306.7M free.
Nov 23 01:41:01 localhost systemd-journald[619]: Received client request to flush runtime journal.
Nov 23 01:41:01 localhost systemd[1]: Starting Create System Users...
Nov 23 01:41:01 localhost systemd[1]: Mounted FUSE Control File System.
Nov 23 01:41:01 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 23 01:41:01 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 23 01:41:01 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 23 01:41:01 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 23 01:41:01 localhost systemd-sysusers[632]: Creating group 'sgx' with GID 989.
Nov 23 01:41:01 localhost systemd-sysusers[632]: Creating group 'systemd-oom' with GID 988.
Nov 23 01:41:01 localhost systemd-sysusers[632]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Nov 23 01:41:01 localhost systemd[1]: Finished Load/Save Random Seed.
Nov 23 01:41:01 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 23 01:41:01 localhost systemd[1]: Finished Create System Users.
Nov 23 01:41:01 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 23 01:41:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 23 01:41:01 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 23 01:41:01 localhost systemd[1]: Set up automount EFI System Partition Automount.
Nov 23 01:41:01 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 23 01:41:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 01:41:01 localhost systemd-udevd[636]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 01:41:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 01:41:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 23 01:41:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 01:41:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 23 01:41:01 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 23 01:41:01 localhost systemd-udevd[647]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 01:41:01 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Nov 23 01:41:01 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Nov 23 01:41:01 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Nov 23 01:41:01 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 23 01:41:01 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 23 01:41:01 localhost systemd-fsck[682]: fsck.fat 4.2 (2021-01-31)
Nov 23 01:41:01 localhost systemd-fsck[682]: /dev/vda2: 12 files, 1782/51145 clusters
Nov 23 01:41:01 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Nov 23 01:41:01 localhost kernel: SVM: TSC scaling supported
Nov 23 01:41:01 localhost kernel: kvm: Nested Virtualization enabled
Nov 23 01:41:01 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 23 01:41:01 localhost kernel: SVM: kvm: Nested Paging enabled
Nov 23 01:41:01 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 23 01:41:01 localhost kernel: SVM: LBR virtualization supported
Nov 23 01:41:01 localhost kernel: Console: switching to colour dummy device 80x25
Nov 23 01:41:01 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 23 01:41:01 localhost kernel: [drm] features: -context_init
Nov 23 01:41:01 localhost kernel: [drm] number of scanouts: 1
Nov 23 01:41:01 localhost kernel: [drm] number of cap sets: 0
Nov 23 01:41:01 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Nov 23 01:41:01 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Nov 23 01:41:01 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 23 01:41:01 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 23 01:41:01 localhost systemd[1]: Mounting /boot...
Nov 23 01:41:02 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Nov 23 01:41:02 localhost kernel: XFS (vda3): Ending clean mount
Nov 23 01:41:02 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Nov 23 01:41:02 localhost systemd[1]: Mounted /boot.
Nov 23 01:41:02 localhost systemd[1]: Mounting /boot/efi...
Nov 23 01:41:02 localhost systemd[1]: Mounted /boot/efi.
Nov 23 01:41:02 localhost systemd[1]: Reached target Local File Systems.
Nov 23 01:41:02 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 23 01:41:02 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 23 01:41:02 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 01:41:02 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 01:41:02 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 23 01:41:02 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 23 01:41:02 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 23 01:41:02 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 718 (bootctl)
Nov 23 01:41:02 localhost systemd[1]: Starting File System Check on /dev/vda2...
Nov 23 01:41:02 localhost systemd[1]: Finished File System Check on /dev/vda2.
Nov 23 01:41:02 localhost systemd[1]: Mounting EFI System Partition Automount...
Nov 23 01:41:02 localhost systemd[1]: Mounted EFI System Partition Automount.
Nov 23 01:41:02 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 23 01:41:02 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 23 01:41:02 localhost systemd[1]: Starting Security Auditing Service...
Nov 23 01:41:02 localhost systemd[1]: Starting RPC Bind...
Nov 23 01:41:02 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 23 01:41:02 localhost auditd[727]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Nov 23 01:41:02 localhost auditd[727]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Nov 23 01:41:02 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 23 01:41:02 localhost systemd[1]: Started RPC Bind.
Nov 23 01:41:02 localhost augenrules[732]: /sbin/augenrules: No change
Nov 23 01:41:02 localhost augenrules[742]: No rules
Nov 23 01:41:02 localhost augenrules[742]: enabled 1
Nov 23 01:41:02 localhost augenrules[742]: failure 1
Nov 23 01:41:02 localhost augenrules[742]: pid 727
Nov 23 01:41:02 localhost augenrules[742]: rate_limit 0
Nov 23 01:41:02 localhost augenrules[742]: backlog_limit 8192
Nov 23 01:41:02 localhost augenrules[742]: lost 0
Nov 23 01:41:02 localhost augenrules[742]: backlog 0
Nov 23 01:41:02 localhost augenrules[742]: backlog_wait_time 60000
Nov 23 01:41:02 localhost augenrules[742]: backlog_wait_time_actual 0
Nov 23 01:41:02 localhost systemd[1]: Started Security Auditing Service.
Nov 23 01:41:02 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 23 01:41:02 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 23 01:41:02 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 23 01:41:02 localhost systemd[1]: Starting Update is Completed...
Nov 23 01:41:02 localhost systemd[1]: Finished Update is Completed.
Nov 23 01:41:02 localhost systemd[1]: Reached target System Initialization.
Nov 23 01:41:02 localhost systemd[1]: Started dnf makecache --timer.
Nov 23 01:41:02 localhost systemd[1]: Started Daily rotation of log files.
Nov 23 01:41:02 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 23 01:41:02 localhost systemd[1]: Reached target Timer Units.
Nov 23 01:41:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 23 01:41:02 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 23 01:41:02 localhost systemd[1]: Reached target Socket Units.
Nov 23 01:41:02 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Nov 23 01:41:02 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 23 01:41:02 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 01:41:02 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 23 01:41:02 localhost systemd[1]: Reached target Basic System.
Nov 23 01:41:02 localhost systemd[1]: Starting NTP client/server...
Nov 23 01:41:02 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 23 01:41:02 localhost systemd[1]: Started irqbalance daemon.
Nov 23 01:41:02 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 23 01:41:02 localhost journal[752]: Ready
Nov 23 01:41:02 localhost systemd[1]: Starting System Logging Service...
Nov 23 01:41:02 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 01:41:02 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 01:41:02 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 01:41:02 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 23 01:41:02 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 23 01:41:02 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 23 01:41:02 localhost systemd[1]: Starting User Login Management...
Nov 23 01:41:02 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 23 01:41:02 localhost rsyslogd[760]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="760" x-info="https://www.rsyslog.com"] start
Nov 23 01:41:02 localhost rsyslogd[760]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Nov 23 01:41:02 localhost systemd[1]: Started System Logging Service.
Nov 23 01:41:02 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 01:41:02 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Nov 23 01:41:02 localhost chronyd[766]: Loaded seccomp filter (level 2)
Nov 23 01:41:02 localhost systemd[1]: Started NTP client/server.
Nov 23 01:41:02 localhost systemd-logind[761]: New seat seat0.
Nov 23 01:41:02 localhost systemd-logind[761]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 01:41:02 localhost systemd-logind[761]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 23 01:41:02 localhost systemd[1]: Started User Login Management.
Nov 23 01:41:02 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 01:41:03 localhost cloud-init[771]: Cloud-init v. 22.1-9.el9 running 'init-local' at Sun, 23 Nov 2025 06:41:03 +0000. Up 6.34 seconds.
Nov 23 01:41:03 localhost systemd[1]: run-cloud\x2dinit-tmp-tmprofrihd6.mount: Deactivated successfully.
Nov 23 01:41:03 localhost systemd[1]: Starting Hostname Service...
Nov 23 01:41:03 localhost systemd[1]: Started Hostname Service.
Nov 23 01:41:03 localhost systemd-hostnamed[785]: Hostname set to <np0005532586.novalocal> (static)
Nov 23 01:41:03 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Nov 23 01:41:03 localhost systemd[1]: Reached target Preparation for Network.
Nov 23 01:41:03 localhost systemd[1]: Starting Network Manager...
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.6959] NetworkManager (version 1.42.2-1.el9) is starting... (boot:366e294d-b3af-42a6-b26e-1cde9989d547)
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.6963] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Nov 23 01:41:03 localhost systemd[1]: Started Network Manager.
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7006] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 01:41:03 localhost systemd[1]: Reached target Network.
Nov 23 01:41:03 localhost systemd[1]: Starting Network Manager Wait Online...
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7099] manager[0x5557fee35020]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 01:41:03 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7140] hostname: hostname: using hostnamed
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7140] hostname: static hostname changed from (none) to "np0005532586.novalocal"
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7156] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 01:41:03 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Nov 23 01:41:03 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 01:41:03 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7324] manager[0x5557fee35020]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7326] manager[0x5557fee35020]: rfkill: WWAN hardware radio set enabled
Nov 23 01:41:03 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Nov 23 01:41:03 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7427] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7428] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7440] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7442] manager: Networking is enabled by state file
Nov 23 01:41:03 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 23 01:41:03 localhost systemd[1]: Reached target NFS client services.
Nov 23 01:41:03 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7491] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7492] settings: Loaded settings plugin: keyfile (internal)
Nov 23 01:41:03 localhost systemd[1]: Reached target Remote File Systems.
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7532] dhcp: init: Using DHCP client 'internal'
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7538] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 01:41:03 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7559] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7567] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7579] device (lo): Activation: starting connection 'lo' (ddc475ad-4acf-4e31-a0fd-535bbde387d9)
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7592] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7598] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Nov 23 01:41:03 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7646] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7651] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7657] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7661] device (eth0): carrier: link connected
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7665] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7674] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7699] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7706] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7708] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7712] manager: NetworkManager state is now CONNECTING
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7716] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7726] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7731] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7790] dhcp4 (eth0): state changed new lease, address=38.102.83.162
Nov 23 01:41:03 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7798] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7827] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7858] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7862] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7870] device (lo): Activation: successful, device activated.
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7879] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7882] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7888] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7893] device (eth0): Activation: successful, device activated.
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7900] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 01:41:03 localhost NetworkManager[790]: <info>  [1763880063.7905] manager: startup complete
Nov 23 01:41:03 localhost systemd[1]: Finished Network Manager Wait Online.
Nov 23 01:41:03 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 23 01:41:04 localhost cloud-init[914]: Cloud-init v. 22.1-9.el9 running 'init' at Sun, 23 Nov 2025 06:41:03 +0000. Up 7.22 seconds.
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |  eth0  | True |        38.102.83.162         | 255.255.255.0 | global | fa:16:3e:e7:d2:09 |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |  eth0  | True | fe80::f816:3eff:fee7:d209/64 |       .       |  link  | fa:16:3e:e7:d2:09 |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 23 01:41:04 localhost cloud-init[914]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 23 01:41:04 localhost systemd[1]: Starting Authorization Manager...
Nov 23 01:41:04 localhost polkitd[1037]: Started polkitd version 0.117
Nov 23 01:41:04 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 01:41:04 localhost systemd[1]: Started Authorization Manager.
Nov 23 01:41:07 localhost cloud-init[914]: Generating public/private rsa key pair.
Nov 23 01:41:07 localhost cloud-init[914]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 23 01:41:07 localhost cloud-init[914]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 23 01:41:07 localhost cloud-init[914]: The key fingerprint is:
Nov 23 01:41:07 localhost cloud-init[914]: SHA256:2NwXCYoSlntxrwmwAuNIg0XSdSl6VA8zIIzFPmlVWY8 root@np0005532586.novalocal
Nov 23 01:41:07 localhost cloud-init[914]: The key's randomart image is:
Nov 23 01:41:07 localhost cloud-init[914]: +---[RSA 3072]----+
Nov 23 01:41:07 localhost cloud-init[914]: |oO=.==B+. .      |
Nov 23 01:41:07 localhost cloud-init[914]: |=++o*o+*.+ . .   |
Nov 23 01:41:07 localhost cloud-init[914]: |++.=.=.oE.. o    |
Nov 23 01:41:07 localhost cloud-init[914]: |..B +.o+ ..  .   |
Nov 23 01:41:07 localhost cloud-init[914]: | . + ...So. .    |
Nov 23 01:41:07 localhost cloud-init[914]: |        o  .     |
Nov 23 01:41:07 localhost cloud-init[914]: |                 |
Nov 23 01:41:07 localhost cloud-init[914]: |                 |
Nov 23 01:41:07 localhost cloud-init[914]: |                 |
Nov 23 01:41:07 localhost cloud-init[914]: +----[SHA256]-----+
Nov 23 01:41:07 localhost cloud-init[914]: Generating public/private ecdsa key pair.
Nov 23 01:41:07 localhost cloud-init[914]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 23 01:41:07 localhost cloud-init[914]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 23 01:41:07 localhost cloud-init[914]: The key fingerprint is:
Nov 23 01:41:07 localhost cloud-init[914]: SHA256:idTQNm5Lnfu0HRYdNoFEBhvUY1gKBqr/v9GHbVnE/vE root@np0005532586.novalocal
Nov 23 01:41:07 localhost cloud-init[914]: The key's randomart image is:
Nov 23 01:41:07 localhost cloud-init[914]: +---[ECDSA 256]---+
Nov 23 01:41:07 localhost cloud-init[914]: |      .o.o.+B=...|
Nov 23 01:41:07 localhost cloud-init[914]: |      .o= ..==.+ |
Nov 23 01:41:07 localhost cloud-init[914]: |     ..o.o +. o+o|
Nov 23 01:41:07 localhost cloud-init[914]: |    .. .+.o   + .|
Nov 23 01:41:07 localhost cloud-init[914]: |   .  .oS. .   = |
Nov 23 01:41:07 localhost cloud-init[914]: |    .   . o + = +|
Nov 23 01:41:07 localhost cloud-init[914]: |     .   . = O .E|
Nov 23 01:41:07 localhost cloud-init[914]: |      .   . = .  |
Nov 23 01:41:07 localhost cloud-init[914]: |       ..o.      |
Nov 23 01:41:07 localhost cloud-init[914]: +----[SHA256]-----+
Nov 23 01:41:07 localhost cloud-init[914]: Generating public/private ed25519 key pair.
Nov 23 01:41:07 localhost cloud-init[914]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 23 01:41:07 localhost cloud-init[914]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 23 01:41:07 localhost cloud-init[914]: The key fingerprint is:
Nov 23 01:41:07 localhost cloud-init[914]: SHA256:QUMWrJl3pROWrcSZFcQZvPl8cm3k2ELNwYFdQzHg7MU root@np0005532586.novalocal
Nov 23 01:41:07 localhost cloud-init[914]: The key's randomart image is:
Nov 23 01:41:07 localhost cloud-init[914]: +--[ED25519 256]--+
Nov 23 01:41:07 localhost cloud-init[914]: |       o*o X==**=|
Nov 23 01:41:07 localhost cloud-init[914]: |       o..O O..+o|
Nov 23 01:41:07 localhost cloud-init[914]: |       +.o = = E.|
Nov 23 01:41:07 localhost cloud-init[914]: |      + ..= + o +|
Nov 23 01:41:07 localhost cloud-init[914]: |       .S. . = =.|
Nov 23 01:41:07 localhost cloud-init[914]: |              * B|
Nov 23 01:41:07 localhost cloud-init[914]: |               * |
Nov 23 01:41:07 localhost cloud-init[914]: |                 |
Nov 23 01:41:07 localhost cloud-init[914]: |                 |
Nov 23 01:41:07 localhost cloud-init[914]: +----[SHA256]-----+
Nov 23 01:41:07 localhost sm-notify[1133]: Version 2.5.4 starting
Nov 23 01:41:07 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Nov 23 01:41:07 localhost systemd[1]: Reached target Cloud-config availability.
Nov 23 01:41:07 localhost systemd[1]: Reached target Network is Online.
Nov 23 01:41:07 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Nov 23 01:41:07 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Nov 23 01:41:07 localhost systemd[1]: Starting Crash recovery kernel arming...
Nov 23 01:41:07 localhost systemd[1]: Starting Notify NFS peers of a restart...
Nov 23 01:41:07 localhost systemd[1]: Starting OpenSSH server daemon...
Nov 23 01:41:07 localhost systemd[1]: Starting Permit User Sessions...
Nov 23 01:41:07 localhost systemd[1]: Started Notify NFS peers of a restart.
Nov 23 01:41:07 localhost systemd[1]: Finished Permit User Sessions.
Nov 23 01:41:07 localhost systemd[1]: Started Command Scheduler.
Nov 23 01:41:07 localhost systemd[1]: Started Getty on tty1.
Nov 23 01:41:07 localhost systemd[1]: Started Serial Getty on ttyS0.
Nov 23 01:41:07 localhost systemd[1]: Reached target Login Prompts.
Nov 23 01:41:07 localhost sshd[1134]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost systemd[1]: Started OpenSSH server daemon.
Nov 23 01:41:07 localhost systemd[1]: Reached target Multi-User System.
Nov 23 01:41:07 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 23 01:41:07 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 23 01:41:07 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 23 01:41:07 localhost kdumpctl[1137]: kdump: No kdump initial ramdisk found.
Nov 23 01:41:07 localhost kdumpctl[1137]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Nov 23 01:41:07 localhost sshd[1212]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost sshd[1228]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost sshd[1246]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost sshd[1257]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost sshd[1270]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost sshd[1290]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost cloud-init[1295]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Sun, 23 Nov 2025 06:41:07 +0000. Up 10.57 seconds.
Nov 23 01:41:07 localhost sshd[1344]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost sshd[1361]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost sshd[1371]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:07 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Nov 23 01:41:07 localhost systemd[1]: Starting Execute cloud user/final scripts...
Nov 23 01:41:07 localhost dracut[1436]: dracut-057-21.git20230214.el9
Nov 23 01:41:07 localhost dracut[1438]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Nov 23 01:41:07 localhost cloud-init[1487]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Sun, 23 Nov 2025 06:41:07 +0000. Up 10.92 seconds.
Nov 23 01:41:07 localhost cloud-init[1558]: #############################################################
Nov 23 01:41:07 localhost cloud-init[1561]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 23 01:41:07 localhost cloud-init[1564]: 256 SHA256:idTQNm5Lnfu0HRYdNoFEBhvUY1gKBqr/v9GHbVnE/vE root@np0005532586.novalocal (ECDSA)
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 23 01:41:07 localhost cloud-init[1574]: 256 SHA256:QUMWrJl3pROWrcSZFcQZvPl8cm3k2ELNwYFdQzHg7MU root@np0005532586.novalocal (ED25519)
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 23 01:41:07 localhost cloud-init[1582]: 3072 SHA256:2NwXCYoSlntxrwmwAuNIg0XSdSl6VA8zIIzFPmlVWY8 root@np0005532586.novalocal (RSA)
Nov 23 01:41:07 localhost cloud-init[1584]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 23 01:41:07 localhost cloud-init[1586]: #############################################################
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 23 01:41:07 localhost cloud-init[1487]: Cloud-init v. 22.1-9.el9 finished at Sun, 23 Nov 2025 06:41:07 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.14 seconds
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 23 01:41:07 localhost dracut[1438]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 23 01:41:07 localhost dracut[1438]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 23 01:41:08 localhost systemd[1]: Reloading Network Manager...
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 23 01:41:08 localhost NetworkManager[790]: <info>  [1763880068.0172] audit: op="reload" arg="0" pid=1650 uid=0 result="success"
Nov 23 01:41:08 localhost NetworkManager[790]: <info>  [1763880068.0179] config: signal: SIGHUP (no changes from disk)
Nov 23 01:41:08 localhost systemd[1]: Reloaded Network Manager.
Nov 23 01:41:08 localhost systemd[1]: Finished Execute cloud user/final scripts.
Nov 23 01:41:08 localhost systemd[1]: Reached target Cloud-init target.
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: memstrack is not available
Nov 23 01:41:08 localhost dracut[1438]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 23 01:41:08 localhost dracut[1438]: memstrack is not available
Nov 23 01:41:08 localhost dracut[1438]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 23 01:41:08 localhost chronyd[766]: Selected source 167.160.187.12 (2.rhel.pool.ntp.org)
Nov 23 01:41:08 localhost chronyd[766]: System clock TAI offset set to 37 seconds
Nov 23 01:41:08 localhost dracut[1438]: *** Including module: systemd ***
Nov 23 01:41:09 localhost dracut[1438]: *** Including module: systemd-initrd ***
Nov 23 01:41:09 localhost dracut[1438]: *** Including module: i18n ***
Nov 23 01:41:09 localhost dracut[1438]: No KEYMAP configured.
Nov 23 01:41:09 localhost dracut[1438]: *** Including module: drm ***
Nov 23 01:41:09 localhost dracut[1438]: *** Including module: prefixdevname ***
Nov 23 01:41:09 localhost dracut[1438]: *** Including module: kernel-modules ***
Nov 23 01:41:10 localhost dracut[1438]: *** Including module: kernel-modules-extra ***
Nov 23 01:41:10 localhost dracut[1438]: *** Including module: qemu ***
Nov 23 01:41:10 localhost dracut[1438]: *** Including module: fstab-sys ***
Nov 23 01:41:10 localhost dracut[1438]: *** Including module: rootfs-block ***
Nov 23 01:41:10 localhost dracut[1438]: *** Including module: terminfo ***
Nov 23 01:41:10 localhost dracut[1438]: *** Including module: udev-rules ***
Nov 23 01:41:10 localhost dracut[1438]: Skipping udev rule: 91-permissions.rules
Nov 23 01:41:10 localhost dracut[1438]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 23 01:41:10 localhost dracut[1438]: *** Including module: virtiofs ***
Nov 23 01:41:10 localhost dracut[1438]: *** Including module: dracut-systemd ***
Nov 23 01:41:11 localhost dracut[1438]: *** Including module: usrmount ***
Nov 23 01:41:11 localhost dracut[1438]: *** Including module: base ***
Nov 23 01:41:11 localhost dracut[1438]: *** Including module: fs-lib ***
Nov 23 01:41:11 localhost dracut[1438]: *** Including module: kdumpbase ***
Nov 23 01:41:11 localhost dracut[1438]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 23 01:41:11 localhost dracut[1438]:  microcode_ctl module: mangling fw_dir
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 23 01:41:11 localhost dracut[1438]:    microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Nov 23 01:41:11 localhost dracut[1438]: *** Including module: shutdown ***
Nov 23 01:41:11 localhost dracut[1438]: *** Including module: squash ***
Nov 23 01:41:11 localhost dracut[1438]: *** Including modules done ***
Nov 23 01:41:11 localhost dracut[1438]: *** Installing kernel module dependencies ***
Nov 23 01:41:12 localhost dracut[1438]: *** Installing kernel module dependencies done ***
Nov 23 01:41:12 localhost dracut[1438]: *** Resolving executable dependencies ***
Nov 23 01:41:13 localhost dracut[1438]: *** Resolving executable dependencies done ***
Nov 23 01:41:13 localhost dracut[1438]: *** Hardlinking files ***
Nov 23 01:41:13 localhost dracut[1438]: Mode:           real
Nov 23 01:41:13 localhost dracut[1438]: Files:          1099
Nov 23 01:41:13 localhost dracut[1438]: Linked:         3 files
Nov 23 01:41:13 localhost dracut[1438]: Compared:       0 xattrs
Nov 23 01:41:13 localhost dracut[1438]: Compared:       373 files
Nov 23 01:41:13 localhost dracut[1438]: Saved:          61.04 KiB
Nov 23 01:41:13 localhost dracut[1438]: Duration:       0.048207 seconds
Nov 23 01:41:13 localhost dracut[1438]: *** Hardlinking files done ***
Nov 23 01:41:13 localhost dracut[1438]: Could not find 'strip'. Not stripping the initramfs.
Nov 23 01:41:13 localhost dracut[1438]: *** Generating early-microcode cpio image ***
Nov 23 01:41:13 localhost dracut[1438]: *** Constructing AuthenticAMD.bin ***
Nov 23 01:41:13 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 01:41:13 localhost dracut[1438]: *** Store current command line parameters ***
Nov 23 01:41:13 localhost dracut[1438]: Stored kernel commandline:
Nov 23 01:41:13 localhost dracut[1438]: No dracut internal kernel commandline stored in the initramfs
Nov 23 01:41:14 localhost dracut[1438]: *** Install squash loader ***
Nov 23 01:41:14 localhost dracut[1438]: *** Squashing the files inside the initramfs ***
Nov 23 01:41:15 localhost dracut[1438]: *** Squashing the files inside the initramfs done ***
Nov 23 01:41:15 localhost dracut[1438]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Nov 23 01:41:16 localhost dracut[1438]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Nov 23 01:41:16 localhost kdumpctl[1137]: kdump: kexec: loaded kdump kernel
Nov 23 01:41:16 localhost kdumpctl[1137]: kdump: Starting kdump: [OK]
Nov 23 01:41:16 localhost systemd[1]: Finished Crash recovery kernel arming.
Nov 23 01:41:16 localhost systemd[1]: Startup finished in 1.259s (kernel) + 2.187s (initrd) + 16.312s (userspace) = 19.759s.
Nov 23 01:41:33 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 01:41:43 localhost sshd[4178]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:41:43 localhost systemd[1]: Created slice User Slice of UID 1000.
Nov 23 01:41:43 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 23 01:41:43 localhost systemd-logind[761]: New session 1 of user zuul.
Nov 23 01:41:43 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 23 01:41:43 localhost systemd[1]: Starting User Manager for UID 1000...
Nov 23 01:41:43 localhost systemd[4182]: Queued start job for default target Main User Target.
Nov 23 01:41:43 localhost systemd[4182]: Created slice User Application Slice.
Nov 23 01:41:43 localhost systemd[4182]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 01:41:43 localhost systemd[4182]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 01:41:43 localhost systemd[4182]: Reached target Paths.
Nov 23 01:41:43 localhost systemd[4182]: Reached target Timers.
Nov 23 01:41:43 localhost systemd[4182]: Starting D-Bus User Message Bus Socket...
Nov 23 01:41:43 localhost systemd[4182]: Starting Create User's Volatile Files and Directories...
Nov 23 01:41:43 localhost systemd[4182]: Listening on D-Bus User Message Bus Socket.
Nov 23 01:41:43 localhost systemd[4182]: Reached target Sockets.
Nov 23 01:41:43 localhost systemd[4182]: Finished Create User's Volatile Files and Directories.
Nov 23 01:41:43 localhost systemd[4182]: Reached target Basic System.
Nov 23 01:41:43 localhost systemd[4182]: Reached target Main User Target.
Nov 23 01:41:43 localhost systemd[4182]: Startup finished in 126ms.
Nov 23 01:41:43 localhost systemd[1]: Started User Manager for UID 1000.
Nov 23 01:41:43 localhost systemd[1]: Started Session 1 of User zuul.
Nov 23 01:41:43 localhost python3[4234]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 01:41:54 localhost python3[4252]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 01:42:00 localhost python3[4305]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 01:42:01 localhost python3[4335]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 23 01:42:04 localhost python3[4351]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:04 localhost python3[4365]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:06 localhost python3[4424]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:42:06 localhost python3[4465]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763880125.968078-394-28490125482391/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=4c10693c6dd746ababb601eafce6af01_id_rsa follow=False checksum=4877a9422cfa308f85d093f4f170aa8e2f5129bc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:07 localhost python3[4538]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:42:08 localhost python3[4579]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763880127.663252-492-155574462101118/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=4c10693c6dd746ababb601eafce6af01_id_rsa.pub follow=False checksum=9e6358c9dcdfe108c4f779a3b698bd3c9d97da46 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:10 localhost python3[4607]: ansible-ping Invoked with data=pong
Nov 23 01:42:12 localhost python3[4621]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 01:42:15 localhost python3[4674]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 23 01:42:18 localhost python3[4696]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:18 localhost python3[4710]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:19 localhost python3[4724]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:19 localhost sshd[4725]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:42:20 localhost python3[4739]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:20 localhost python3[4753]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:20 localhost python3[4767]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:23 localhost python3[4785]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:24 localhost python3[4833]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:42:25 localhost python3[4876]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763880144.6161716-105-39559632738137/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:32 localhost python3[4904]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:33 localhost python3[4918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:33 localhost python3[4932]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:33 localhost python3[4946]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:33 localhost python3[4960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:34 localhost python3[4974]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:34 localhost python3[4988]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:34 localhost python3[5002]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:34 localhost python3[5016]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:35 localhost python3[5030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:35 localhost python3[5044]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:35 localhost python3[5058]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:36 localhost python3[5072]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:36 localhost python3[5086]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:36 localhost python3[5100]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:36 localhost python3[5114]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:37 localhost python3[5128]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:37 localhost python3[5142]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:37 localhost python3[5156]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:37 localhost python3[5170]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:38 localhost python3[5184]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:38 localhost python3[5198]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:38 localhost python3[5212]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:38 localhost python3[5226]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:39 localhost python3[5240]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:39 localhost python3[5254]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:42:40 localhost python3[5270]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 01:42:41 localhost systemd[1]: Starting Time & Date Service...
Nov 23 01:42:41 localhost systemd[1]: Started Time & Date Service.
Nov 23 01:42:41 localhost systemd-timedated[5272]: Changed time zone to 'UTC' (UTC).
Nov 23 01:42:42 localhost python3[5291]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:42 localhost sshd[5292]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:42:43 localhost python3[5339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:42:43 localhost python3[5380]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763880163.307937-498-241968011284823/source _original_basename=tmpbu8zm871 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:45 localhost python3[5440]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:42:45 localhost python3[5481]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763880164.7956803-587-122505782585422/source _original_basename=tmpvjb3ap6m follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:47 localhost python3[5543]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:42:47 localhost python3[5586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763880166.8989449-732-249805970022921/source _original_basename=tmph_i2_xvy follow=False checksum=855cc62dd01fe7364cfd0c3455e65e7a945e3fa8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:48 localhost python3[5614]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:42:48 localhost python3[5630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:42:50 localhost python3[5680]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:42:50 localhost python3[5723]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763880169.8520308-859-174799870311670/source _original_basename=tmpo1mr_fze follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:51 localhost python3[5754]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-656c-65aa-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:42:52 localhost python3[5772]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-656c-65aa-000000000024-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 23 01:42:54 localhost python3[5790]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:42:56 localhost sshd[5791]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:43:11 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 01:43:14 localhost python3[5811]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:43:24 localhost sshd[5812]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:44:14 localhost systemd-logind[761]: Session 1 logged out. Waiting for processes to exit.
Nov 23 01:44:34 localhost systemd[4182]: Starting Mark boot as successful...
Nov 23 01:44:34 localhost systemd[4182]: Finished Mark boot as successful.
Nov 23 01:45:03 localhost systemd[1]: Unmounting EFI System Partition Automount...
Nov 23 01:45:03 localhost systemd[1]: efi.mount: Deactivated successfully.
Nov 23 01:45:03 localhost systemd[1]: Unmounted EFI System Partition Automount.
Nov 23 01:45:13 localhost sshd[5818]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:46:15 localhost sshd[5820]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:46:39 localhost sshd[5823]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:46:56 localhost sshd[5825]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: reg 0x10: [io  0x0000-0x003f]
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff]
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref]
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff]
Nov 23 01:47:20 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io  0x1000-0x103f]
Nov 23 01:47:20 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5248] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 01:47:20 localhost systemd-udevd[5827]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5409] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Nov 23 01:47:20 localhost systemd[4182]: Created slice User Background Tasks Slice.
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5451] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5459] device (eth1): carrier: link connected
Nov 23 01:47:20 localhost systemd[4182]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5463] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5472] policy: auto-activating connection 'Wired connection 1' (0b35e19a-9001-3b52-bbf9-919b6eb25ed1)
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5480] device (eth1): Activation: starting connection 'Wired connection 1' (0b35e19a-9001-3b52-bbf9-919b6eb25ed1)
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5482] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5490] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5498] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Nov 23 01:47:20 localhost NetworkManager[790]: <info>  [1763880440.5504] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 01:47:20 localhost systemd[4182]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 01:47:21 localhost sshd[5831]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:47:21 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Nov 23 01:47:21 localhost systemd-logind[761]: New session 3 of user zuul.
Nov 23 01:47:21 localhost systemd[1]: Started Session 3 of User zuul.
Nov 23 01:47:21 localhost python3[5848]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-7c02-8043-000000000475-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:47:35 localhost python3[5898]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:47:35 localhost python3[5941]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763880454.7850137-537-151660774727075/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=bbb38024f4bcd472100461e0507f640e8a22c9b8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:47:36 localhost python3[5971]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 01:47:36 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 23 01:47:36 localhost systemd[1]: Stopped Network Manager Wait Online.
Nov 23 01:47:36 localhost systemd[1]: Stopping Network Manager Wait Online...
Nov 23 01:47:36 localhost systemd[1]: Stopping Network Manager...
Nov 23 01:47:36 localhost NetworkManager[790]: <info>  [1763880456.0401] caught SIGTERM, shutting down normally.
Nov 23 01:47:36 localhost NetworkManager[790]: <info>  [1763880456.0505] dhcp4 (eth0): canceled DHCP transaction
Nov 23 01:47:36 localhost NetworkManager[790]: <info>  [1763880456.0505] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 01:47:36 localhost NetworkManager[790]: <info>  [1763880456.0505] dhcp4 (eth0): state changed no lease
Nov 23 01:47:36 localhost NetworkManager[790]: <info>  [1763880456.0511] manager: NetworkManager state is now CONNECTING
Nov 23 01:47:36 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 01:47:36 localhost NetworkManager[790]: <info>  [1763880456.0665] dhcp4 (eth1): canceled DHCP transaction
Nov 23 01:47:36 localhost NetworkManager[790]: <info>  [1763880456.0665] dhcp4 (eth1): state changed no lease
Nov 23 01:47:36 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 01:47:36 localhost NetworkManager[790]: <info>  [1763880456.0750] exiting (success)
Nov 23 01:47:36 localhost systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 23 01:47:36 localhost systemd[1]: Stopped Network Manager.
Nov 23 01:47:36 localhost systemd[1]: NetworkManager.service: Consumed 2.288s CPU time.
Nov 23 01:47:36 localhost systemd[1]: Starting Network Manager...
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.1388] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:366e294d-b3af-42a6-b26e-1cde9989d547)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.1390] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.1416] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 23 01:47:36 localhost systemd[1]: Started Network Manager.
Nov 23 01:47:36 localhost systemd[1]: Starting Network Manager Wait Online...
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.1470] manager[0x55e576a6f090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 23 01:47:36 localhost systemd[1]: Starting Hostname Service...
Nov 23 01:47:36 localhost systemd[1]: Started Hostname Service.
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2268] hostname: hostname: using hostnamed
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2269] hostname: static hostname changed from (none) to "np0005532586.novalocal"
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2275] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2280] manager[0x55e576a6f090]: rfkill: Wi-Fi hardware radio set enabled
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2281] manager[0x55e576a6f090]: rfkill: WWAN hardware radio set enabled
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2319] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2319] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2320] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2321] manager: Networking is enabled by state file
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2328] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2329] settings: Loaded settings plugin: keyfile (internal)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2379] dhcp: init: Using DHCP client 'internal'
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2383] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2391] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2398] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2411] device (lo): Activation: starting connection 'lo' (ddc475ad-4acf-4e31-a0fd-535bbde387d9)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2422] device (eth0): carrier: link connected
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2430] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2440] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2441] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2453] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2468] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2479] device (eth1): carrier: link connected
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2485] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2493] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (0b35e19a-9001-3b52-bbf9-919b6eb25ed1) (indicated)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2493] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2503] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2516] device (eth1): Activation: starting connection 'Wired connection 1' (0b35e19a-9001-3b52-bbf9-919b6eb25ed1)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2545] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2561] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2565] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2568] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2573] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2577] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2582] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2608] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2616] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2619] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2630] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2633] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2651] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2659] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2667] device (lo): Activation: successful, device activated.
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2676] dhcp4 (eth0): state changed new lease, address=38.102.83.162
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2682] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2796] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2835] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2837] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2842] manager: NetworkManager state is now CONNECTED_SITE
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2847] device (eth0): Activation: successful, device activated.
Nov 23 01:47:36 localhost NetworkManager[5990]: <info>  [1763880456.2852] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 23 01:47:36 localhost python3[6044]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-7c02-8043-000000000136-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:47:46 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 01:48:01 localhost sshd[6057]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:48:06 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 01:48:21 localhost NetworkManager[5990]: <info>  [1763880501.7700] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Nov 23 01:48:21 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 01:48:21 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 01:48:21 localhost NetworkManager[5990]: <info>  [1763880501.7922] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Nov 23 01:48:21 localhost NetworkManager[5990]: <info>  [1763880501.7927] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Nov 23 01:48:21 localhost NetworkManager[5990]: <info>  [1763880501.7939] device (eth1): Activation: successful, device activated.
Nov 23 01:48:21 localhost NetworkManager[5990]: <info>  [1763880501.7947] manager: startup complete
Nov 23 01:48:21 localhost systemd[1]: Finished Network Manager Wait Online.
Nov 23 01:48:31 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 01:48:36 localhost systemd[1]: session-3.scope: Deactivated successfully.
Nov 23 01:48:36 localhost systemd[1]: session-3.scope: Consumed 1.441s CPU time.
Nov 23 01:48:36 localhost systemd-logind[761]: Session 3 logged out. Waiting for processes to exit.
Nov 23 01:48:36 localhost systemd-logind[761]: Removed session 3.
Nov 23 01:48:58 localhost sshd[6075]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:48:58 localhost systemd-logind[761]: New session 4 of user zuul.
Nov 23 01:48:58 localhost systemd[1]: Started Session 4 of User zuul.
Nov 23 01:48:59 localhost python3[6126]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:48:59 localhost python3[6169]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763880538.7585118-628-36657610773116/source _original_basename=tmp3bvk2s9p follow=False checksum=492625bb7c06d655281f511b293f3f3edc954e6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:49:01 localhost systemd[1]: session-4.scope: Deactivated successfully.
Nov 23 01:49:01 localhost systemd-logind[761]: Session 4 logged out. Waiting for processes to exit.
Nov 23 01:49:01 localhost systemd-logind[761]: Removed session 4.
Nov 23 01:49:02 localhost sshd[6184]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:49:21 localhost sshd[6186]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:50:47 localhost sshd[6188]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:51:03 localhost sshd[6190]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:52:15 localhost sshd[6193]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:52:37 localhost sshd[6195]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:53:05 localhost sshd[6197]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:53:43 localhost sshd[6199]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:54:45 localhost sshd[6202]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:54:45 localhost systemd-logind[761]: New session 5 of user zuul.
Nov 23 01:54:45 localhost systemd[1]: Started Session 5 of User zuul.
Nov 23 01:54:46 localhost python3[6221]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9f63-9a03-000000001cfc-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:54:47 localhost python3[6240]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:54:47 localhost python3[6256]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:54:47 localhost python3[6272]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:54:48 localhost python3[6288]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:54:48 localhost python3[6304]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:54:50 localhost python3[6352]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 01:54:50 localhost python3[6395]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763880889.892921-644-241822555124174/source _original_basename=tmptsps21hn follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 01:54:52 localhost python3[6425]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 01:54:52 localhost systemd[1]: Reloading.
Nov 23 01:54:52 localhost systemd-rc-local-generator[6443]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 01:54:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 01:54:53 localhost python3[6472]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 23 01:54:55 localhost python3[6488]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:54:55 localhost python3[6506]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:54:55 localhost python3[6524]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:54:55 localhost python3[6542]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:54:57 localhost python3[6559]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-9f63-9a03-000000001d03-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:54:57 localhost python3[6578]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 01:55:00 localhost systemd[1]: session-5.scope: Deactivated successfully.
Nov 23 01:55:00 localhost systemd[1]: session-5.scope: Consumed 3.892s CPU time.
Nov 23 01:55:00 localhost systemd-logind[761]: Session 5 logged out. Waiting for processes to exit.
Nov 23 01:55:00 localhost systemd-logind[761]: Removed session 5.
Nov 23 01:55:03 localhost sshd[6585]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:55:07 localhost sshd[6587]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:56:19 localhost sshd[6590]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:56:19 localhost systemd[1]: Starting Cleanup of Temporary Directories...
Nov 23 01:56:19 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 23 01:56:19 localhost systemd[1]: Finished Cleanup of Temporary Directories.
Nov 23 01:56:19 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 23 01:56:19 localhost systemd-logind[761]: New session 6 of user zuul.
Nov 23 01:56:19 localhost systemd[1]: Started Session 6 of User zuul.
Nov 23 01:56:20 localhost systemd[1]: Starting RHSM dbus service...
Nov 23 01:56:20 localhost systemd[1]: Started RHSM dbus service.
Nov 23 01:56:20 localhost rhsm-service[6616]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 23 01:56:20 localhost rhsm-service[6616]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 23 01:56:20 localhost rhsm-service[6616]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 23 01:56:20 localhost rhsm-service[6616]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 23 01:56:22 localhost rhsm-service[6616]: INFO [subscription_manager.managerlib:90] Consumer created: np0005532586.novalocal (1dfa576e-3fe2-4912-9ff8-232d091abdc0)
Nov 23 01:56:22 localhost subscription-manager[6616]: Registered system with identity: 1dfa576e-3fe2-4912-9ff8-232d091abdc0
Nov 23 01:56:23 localhost rhsm-service[6616]: INFO [subscription_manager.entcertlib:131] certs updated:
Nov 23 01:56:23 localhost rhsm-service[6616]: Total updates: 1
Nov 23 01:56:23 localhost rhsm-service[6616]: Found (local) serial# []
Nov 23 01:56:23 localhost rhsm-service[6616]: Expected (UEP) serial# [2387262294322638554]
Nov 23 01:56:23 localhost rhsm-service[6616]: Added (new)
Nov 23 01:56:23 localhost rhsm-service[6616]:  [sn:2387262294322638554 ( Content Access,) @ /etc/pki/entitlement/2387262294322638554.pem]
Nov 23 01:56:23 localhost rhsm-service[6616]: Deleted (rogue):
Nov 23 01:56:23 localhost rhsm-service[6616]:  <NONE>
Nov 23 01:56:23 localhost subscription-manager[6616]: Added subscription for 'Content Access' contract 'None'
Nov 23 01:56:23 localhost subscription-manager[6616]: Added subscription for product ' Content Access'
Nov 23 01:56:24 localhost rhsm-service[6616]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 23 01:56:24 localhost rhsm-service[6616]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 23 01:56:24 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 01:56:24 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 01:56:24 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 01:56:25 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 01:56:25 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 01:56:27 localhost python3[6707]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-d6e0-fafb-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 01:56:32 localhost sshd[6711]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:56:59 localhost sshd[6713]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:57:06 localhost sshd[6714]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:57:20 localhost python3[6731]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 01:57:49 localhost setsebool[6807]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 23 01:57:49 localhost setsebool[6807]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 23 01:57:57 localhost kernel: SELinux:  Converting 406 SID table entries...
Nov 23 01:57:57 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 01:57:57 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 01:57:57 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 01:57:57 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 01:57:57 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 01:57:57 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 01:57:57 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 01:58:01 localhost sshd[7552]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:58:10 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Nov 23 01:58:10 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 01:58:10 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 01:58:10 localhost systemd[1]: Reloading.
Nov 23 01:58:10 localhost systemd-rc-local-generator[7672]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 01:58:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 01:58:10 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 01:58:11 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 01:58:11 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 01:58:16 localhost podman[14381]: 2025-11-23 06:58:16.04923177 +0000 UTC m=+0.115867332 system refresh
Nov 23 01:58:16 localhost systemd[4182]: Starting D-Bus User Message Bus...
Nov 23 01:58:16 localhost dbus-broker-launch[15940]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 23 01:58:16 localhost dbus-broker-launch[15940]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 23 01:58:16 localhost systemd[4182]: Started D-Bus User Message Bus.
Nov 23 01:58:16 localhost journal[15940]: Ready
Nov 23 01:58:16 localhost systemd[4182]: selinux: avc:  op=load_policy lsm=selinux seqno=3 res=1
Nov 23 01:58:16 localhost systemd[4182]: Created slice Slice /user.
Nov 23 01:58:16 localhost systemd[4182]: podman-15822.scope: unit configures an IP firewall, but not running as root.
Nov 23 01:58:16 localhost systemd[4182]: (This warning is only shown for the first unit using IP firewalling.)
Nov 23 01:58:16 localhost systemd[4182]: Started podman-15822.scope.
Nov 23 01:58:17 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 01:58:17 localhost systemd[4182]: Started podman-pause-537e249f.scope.
Nov 23 01:58:17 localhost systemd[1]: session-6.scope: Deactivated successfully.
Nov 23 01:58:17 localhost systemd[1]: session-6.scope: Consumed 49.545s CPU time.
Nov 23 01:58:17 localhost systemd-logind[761]: Session 6 logged out. Waiting for processes to exit.
Nov 23 01:58:17 localhost systemd-logind[761]: Removed session 6.
Nov 23 01:58:18 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 01:58:18 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 01:58:18 localhost systemd[1]: man-db-cache-update.service: Consumed 9.190s CPU time.
Nov 23 01:58:18 localhost systemd[1]: run-r5a9962f059b242ae9e5e6e1995dddea4.service: Deactivated successfully.
Nov 23 01:58:33 localhost sshd[18468]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:58:33 localhost sshd[18467]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:58:33 localhost sshd[18466]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:58:33 localhost sshd[18465]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:58:33 localhost sshd[18464]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:58:38 localhost sshd[18475]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:58:38 localhost systemd-logind[761]: New session 7 of user zuul.
Nov 23 01:58:38 localhost systemd[1]: Started Session 7 of User zuul.
Nov 23 01:58:39 localhost python3[18492]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFp5ffFMuM9GICal8e/QjT+yRcsIbGaMWRt/HA7rb1TB5YKChpgkSmzIFogHU4gX8uce12LB+CRf7ndL6kzcKrg= zuul@np0005532578.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:58:39 localhost python3[18508]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFp5ffFMuM9GICal8e/QjT+yRcsIbGaMWRt/HA7rb1TB5YKChpgkSmzIFogHU4gX8uce12LB+CRf7ndL6kzcKrg= zuul@np0005532578.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 01:58:41 localhost systemd[1]: session-7.scope: Deactivated successfully.
Nov 23 01:58:41 localhost systemd-logind[761]: Session 7 logged out. Waiting for processes to exit.
Nov 23 01:58:41 localhost systemd-logind[761]: Removed session 7.
Nov 23 01:59:14 localhost sshd[18509]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 01:59:31 localhost sshd[18511]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:00:08 localhost sshd[18514]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:00:08 localhost systemd-logind[761]: New session 8 of user zuul.
Nov 23 02:00:08 localhost systemd[1]: Started Session 8 of User zuul.
Nov 23 02:00:08 localhost python3[18533]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 02:00:09 localhost python3[18549]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532586.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 23 02:00:11 localhost python3[18599]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:00:11 localhost python3[18642]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763881210.7764955-137-144462151854107/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=4c10693c6dd746ababb601eafce6af01_id_rsa follow=False checksum=4877a9422cfa308f85d093f4f170aa8e2f5129bc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:00:12 localhost python3[18704]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:00:12 localhost python3[18747]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763881212.3876748-228-153378262863608/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=4c10693c6dd746ababb601eafce6af01_id_rsa.pub follow=False checksum=9e6358c9dcdfe108c4f779a3b698bd3c9d97da46 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:00:14 localhost python3[18777]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:00:16 localhost python3[18823]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:00:16 localhost python3[18839]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpt2mtucx4 recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:00:17 localhost python3[18899]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:00:17 localhost python3[18915]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmp79mszewf recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:00:19 localhost python3[18975]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:00:19 localhost python3[18991]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmplnaj0s8c recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:00:19 localhost systemd[1]: session-8.scope: Deactivated successfully.
Nov 23 02:00:19 localhost systemd[1]: session-8.scope: Consumed 3.519s CPU time.
Nov 23 02:00:19 localhost systemd-logind[761]: Session 8 logged out. Waiting for processes to exit.
Nov 23 02:00:19 localhost systemd-logind[761]: Removed session 8.
Nov 23 02:00:21 localhost sshd[19007]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:01:02 localhost sshd[19024]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:01:21 localhost sshd[19026]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:02:29 localhost sshd[19028]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:02:33 localhost sshd[19030]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:02:33 localhost systemd-logind[761]: New session 9 of user zuul.
Nov 23 02:02:33 localhost systemd[1]: Started Session 9 of User zuul.
Nov 23 02:02:33 localhost python3[19076]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:03:26 localhost sshd[19079]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:03:29 localhost systemd[1]: Starting dnf makecache...
Nov 23 02:03:29 localhost dnf[19081]: Updating Subscription Management repositories.
Nov 23 02:03:30 localhost dnf[19081]: Failed determining last makecache time.
Nov 23 02:03:31 localhost dnf[19081]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   30 kB/s | 4.1 kB     00:00
Nov 23 02:03:31 localhost dnf[19081]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  32 kB/s | 4.5 kB     00:00
Nov 23 02:03:31 localhost dnf[19081]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  33 kB/s | 4.5 kB     00:00
Nov 23 02:03:31 localhost dnf[19081]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   30 kB/s | 4.1 kB     00:00
Nov 23 02:03:32 localhost dnf[19081]: Metadata cache created.
Nov 23 02:03:32 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 23 02:03:32 localhost systemd[1]: Finished dnf makecache.
Nov 23 02:03:32 localhost systemd[1]: dnf-makecache.service: Consumed 2.544s CPU time.
Nov 23 02:03:56 localhost sshd[19086]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:05:27 localhost sshd[19088]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:05:45 localhost sshd[19090]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:06:52 localhost sshd[19092]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:07:33 localhost systemd[1]: session-9.scope: Deactivated successfully.
Nov 23 02:07:33 localhost systemd-logind[761]: Session 9 logged out. Waiting for processes to exit.
Nov 23 02:07:33 localhost systemd-logind[761]: Removed session 9.
Nov 23 02:07:52 localhost sshd[19095]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:08:18 localhost sshd[19097]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:09:43 localhost sshd[19100]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:09:56 localhost sshd[19102]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:11:12 localhost sshd[19105]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:11:56 localhost sshd[19107]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:12:00 localhost sshd[19109]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:12:46 localhost sshd[19111]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:13:38 localhost sshd[19114]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:13:39 localhost systemd-logind[761]: New session 10 of user zuul.
Nov 23 02:13:39 localhost systemd[1]: Started Session 10 of User zuul.
Nov 23 02:13:39 localhost python3[19131]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-1b67-9ac8-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:13:42 localhost python3[19151]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-1b67-9ac8-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:14:04 localhost sshd[19159]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:14:13 localhost python3[19176]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Nov 23 02:14:16 localhost sshd[19238]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:14:16 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:14:16 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:14:47 localhost python3[19335]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Nov 23 02:14:50 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:15:09 localhost python3[19476]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Nov 23 02:15:11 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:15:11 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:15:17 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:15:17 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:15:40 localhost python3[19812]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Nov 23 02:15:43 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:15:43 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:15:44 localhost sshd[19880]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:15:48 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:15:48 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:16:11 localhost sshd[20076]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:16:12 localhost python3[20093]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Nov 23 02:16:15 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:16:21 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:16:21 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:16:32 localhost python3[20430]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-000000000013-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:16:59 localhost python3[20449]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:17:10 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 23 02:17:13 localhost sshd[20561]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:17:19 localhost kernel: SELinux:  Converting 499 SID table entries...
Nov 23 02:17:19 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 02:17:19 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 02:17:19 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 02:17:19 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 02:17:19 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 02:17:19 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 02:17:19 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 02:17:22 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Nov 23 02:17:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 02:17:22 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 02:17:22 localhost systemd[1]: Reloading.
Nov 23 02:17:23 localhost systemd-rc-local-generator[21116]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:17:23 localhost systemd-sysv-generator[21122]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:17:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:17:23 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 02:17:23 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 02:17:23 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 02:17:23 localhost systemd[1]: run-r4d71ca14ef394574bf935a25f239e656.service: Deactivated successfully.
Nov 23 02:17:24 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:17:24 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 02:17:41 localhost python3[21751]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-000000000015-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:17:55 localhost python3[21771]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:17:56 localhost python3[21819]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:17:57 localhost python3[21862]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763882276.451495-334-276709012058935/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=91bc45728dd9738fc644e3ada9d8642294da29ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:17:59 localhost python3[21892]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network  state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Nov 23 02:17:59 localhost systemd-journald[619]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Nov 23 02:17:59 localhost systemd-journald[619]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 02:17:59 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 02:17:59 localhost python3[21913]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Nov 23 02:17:59 localhost python3[21933]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Nov 23 02:17:59 localhost python3[21953]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Nov 23 02:18:00 localhost python3[21973]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Nov 23 02:18:02 localhost python3[21993]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 02:18:03 localhost systemd[1]: Starting LSB: Bring up/down networking...
Nov 23 02:18:03 localhost network[21996]: WARN      : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:03 localhost network[22007]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:03 localhost network[21996]: WARN      : [network] 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:03 localhost network[22008]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:03 localhost network[21996]: WARN      : [network] It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 02:18:03 localhost network[22009]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 02:18:03 localhost NetworkManager[5990]: <info>  [1763882283.1249] audit: op="connections-reload" pid=22037 uid=0 result="success"
Nov 23 02:18:03 localhost network[21996]: Bringing up loopback interface:  [  OK  ]
Nov 23 02:18:03 localhost NetworkManager[5990]: <info>  [1763882283.3147] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22125 uid=0 result="success"
Nov 23 02:18:03 localhost network[21996]: Bringing up interface eth0:  [  OK  ]
Nov 23 02:18:03 localhost systemd[1]: Started LSB: Bring up/down networking.
Nov 23 02:18:03 localhost python3[22166]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 02:18:03 localhost systemd[1]: Starting Open vSwitch Database Unit...
Nov 23 02:18:03 localhost chown[22170]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 23 02:18:03 localhost ovs-ctl[22175]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 23 02:18:03 localhost ovs-ctl[22175]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 23 02:18:03 localhost ovs-ctl[22175]: Starting ovsdb-server [  OK  ]
Nov 23 02:18:03 localhost ovs-vsctl[22224]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 23 02:18:04 localhost ovs-vsctl[22244]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"9b5ed7a7-8af8-41a0-a5ff-546625cecbf9\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Nov 23 02:18:04 localhost ovs-ctl[22175]: Configuring Open vSwitch system IDs [  OK  ]
Nov 23 02:18:04 localhost ovs-ctl[22175]: Enabling remote OVSDB managers [  OK  ]
Nov 23 02:18:04 localhost systemd[1]: Started Open vSwitch Database Unit.
Nov 23 02:18:04 localhost ovs-vsctl[22250]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005532586.novalocal
Nov 23 02:18:04 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 23 02:18:04 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 23 02:18:04 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 23 02:18:04 localhost kernel: openvswitch: Open vSwitch switching datapath
Nov 23 02:18:04 localhost ovs-ctl[22294]: Inserting openvswitch module [  OK  ]
Nov 23 02:18:04 localhost ovs-ctl[22263]: Starting ovs-vswitchd [  OK  ]
Nov 23 02:18:04 localhost ovs-ctl[22263]: Enabling remote OVSDB managers [  OK  ]
Nov 23 02:18:04 localhost ovs-vsctl[22312]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005532586.novalocal
Nov 23 02:18:04 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 23 02:18:04 localhost systemd[1]: Starting Open vSwitch...
Nov 23 02:18:04 localhost systemd[1]: Finished Open vSwitch.
Nov 23 02:18:22 localhost sshd[22315]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:18:34 localhost python3[22332]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-00000000001a-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:18:35 localhost NetworkManager[5990]: <info>  [1763882315.8186] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22491 uid=0 result="success"
Nov 23 02:18:35 localhost ifup[22492]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:35 localhost ifup[22493]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:35 localhost ifup[22494]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:35 localhost NetworkManager[5990]: <info>  [1763882315.8511] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22500 uid=0 result="success"
Nov 23 02:18:35 localhost ovs-vsctl[22502]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:e4:75:25 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Nov 23 02:18:35 localhost kernel: device ovs-system entered promiscuous mode
Nov 23 02:18:35 localhost NetworkManager[5990]: <info>  [1763882315.8799] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Nov 23 02:18:35 localhost kernel: Timeout policy base is empty
Nov 23 02:18:35 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Nov 23 02:18:35 localhost systemd-udevd[22503]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 02:18:35 localhost systemd-udevd[22518]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 02:18:35 localhost kernel: device br-ex entered promiscuous mode
Nov 23 02:18:35 localhost NetworkManager[5990]: <info>  [1763882315.9245] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Nov 23 02:18:35 localhost NetworkManager[5990]: <info>  [1763882315.9524] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22529 uid=0 result="success"
Nov 23 02:18:35 localhost NetworkManager[5990]: <info>  [1763882315.9728] device (br-ex): carrier: link connected
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.0272] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22558 uid=0 result="success"
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.0777] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22573 uid=0 result="success"
Nov 23 02:18:39 localhost NET[22598]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.1703] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.1835] dhcp4 (eth1): canceled DHCP transaction
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.1835] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.1835] dhcp4 (eth1): state changed no lease
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.1888] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22607 uid=0 result="success"
Nov 23 02:18:39 localhost ifup[22608]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:39 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 02:18:39 localhost ifup[22609]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:39 localhost ifup[22611]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:39 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.2323] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22625 uid=0 result="success"
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.2806] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22635 uid=0 result="success"
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.2879] device (eth1): carrier: link connected
Nov 23 02:18:39 localhost NetworkManager[5990]: <info>  [1763882319.3111] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22644 uid=0 result="success"
Nov 23 02:18:39 localhost ipv6_wait_tentative[22656]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Nov 23 02:18:40 localhost ipv6_wait_tentative[22661]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Nov 23 02:18:40 localhost sshd[22663]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:18:41 localhost NetworkManager[5990]: <info>  [1763882321.3862] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22672 uid=0 result="success"
Nov 23 02:18:41 localhost ovs-vsctl[22687]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Nov 23 02:18:41 localhost kernel: device eth1 entered promiscuous mode
Nov 23 02:18:41 localhost NetworkManager[5990]: <info>  [1763882321.4593] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22695 uid=0 result="success"
Nov 23 02:18:41 localhost ifup[22696]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:41 localhost ifup[22697]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:41 localhost ifup[22698]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:41 localhost NetworkManager[5990]: <info>  [1763882321.4901] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22704 uid=0 result="success"
Nov 23 02:18:41 localhost NetworkManager[5990]: <info>  [1763882321.5347] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22714 uid=0 result="success"
Nov 23 02:18:41 localhost ifup[22715]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:41 localhost ifup[22716]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:41 localhost ifup[22717]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:41 localhost NetworkManager[5990]: <info>  [1763882321.5665] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22723 uid=0 result="success"
Nov 23 02:18:41 localhost ovs-vsctl[22726]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Nov 23 02:18:41 localhost kernel: device vlan20 entered promiscuous mode
Nov 23 02:18:41 localhost NetworkManager[5990]: <info>  [1763882321.6072] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Nov 23 02:18:41 localhost systemd-udevd[22728]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 02:18:41 localhost NetworkManager[5990]: <info>  [1763882321.6352] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22737 uid=0 result="success"
Nov 23 02:18:41 localhost NetworkManager[5990]: <info>  [1763882321.6564] device (vlan20): carrier: link connected
Nov 23 02:18:44 localhost NetworkManager[5990]: <info>  [1763882324.7165] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22766 uid=0 result="success"
Nov 23 02:18:44 localhost NetworkManager[5990]: <info>  [1763882324.7626] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22781 uid=0 result="success"
Nov 23 02:18:44 localhost NetworkManager[5990]: <info>  [1763882324.8228] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22802 uid=0 result="success"
Nov 23 02:18:44 localhost ifup[22803]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:44 localhost ifup[22804]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:44 localhost ifup[22805]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:44 localhost NetworkManager[5990]: <info>  [1763882324.8550] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22811 uid=0 result="success"
Nov 23 02:18:44 localhost ovs-vsctl[22814]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Nov 23 02:18:44 localhost kernel: device vlan21 entered promiscuous mode
Nov 23 02:18:44 localhost NetworkManager[5990]: <info>  [1763882324.9313] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Nov 23 02:18:44 localhost systemd-udevd[22816]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 02:18:44 localhost NetworkManager[5990]: <info>  [1763882324.9572] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22826 uid=0 result="success"
Nov 23 02:18:44 localhost NetworkManager[5990]: <info>  [1763882324.9786] device (vlan21): carrier: link connected
Nov 23 02:18:48 localhost NetworkManager[5990]: <info>  [1763882328.0334] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22856 uid=0 result="success"
Nov 23 02:18:48 localhost NetworkManager[5990]: <info>  [1763882328.0808] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22871 uid=0 result="success"
Nov 23 02:18:48 localhost NetworkManager[5990]: <info>  [1763882328.1385] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22892 uid=0 result="success"
Nov 23 02:18:48 localhost ifup[22893]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:48 localhost ifup[22894]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:48 localhost ifup[22895]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:48 localhost NetworkManager[5990]: <info>  [1763882328.1690] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22901 uid=0 result="success"
Nov 23 02:18:48 localhost ovs-vsctl[22904]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Nov 23 02:18:48 localhost systemd-udevd[22906]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 02:18:48 localhost kernel: device vlan22 entered promiscuous mode
Nov 23 02:18:48 localhost NetworkManager[5990]: <info>  [1763882328.2082] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Nov 23 02:18:48 localhost NetworkManager[5990]: <info>  [1763882328.2319] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22916 uid=0 result="success"
Nov 23 02:18:48 localhost NetworkManager[5990]: <info>  [1763882328.2512] device (vlan22): carrier: link connected
Nov 23 02:18:49 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 02:18:51 localhost NetworkManager[5990]: <info>  [1763882331.2999] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22946 uid=0 result="success"
Nov 23 02:18:51 localhost NetworkManager[5990]: <info>  [1763882331.3463] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22961 uid=0 result="success"
Nov 23 02:18:51 localhost NetworkManager[5990]: <info>  [1763882331.4032] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22982 uid=0 result="success"
Nov 23 02:18:51 localhost ifup[22983]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:51 localhost ifup[22984]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:51 localhost ifup[22985]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:51 localhost NetworkManager[5990]: <info>  [1763882331.4349] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22991 uid=0 result="success"
Nov 23 02:18:51 localhost ovs-vsctl[22994]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Nov 23 02:18:51 localhost kernel: device vlan44 entered promiscuous mode
Nov 23 02:18:51 localhost NetworkManager[5990]: <info>  [1763882331.4724] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Nov 23 02:18:51 localhost systemd-udevd[22996]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 02:18:51 localhost NetworkManager[5990]: <info>  [1763882331.4988] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23006 uid=0 result="success"
Nov 23 02:18:51 localhost NetworkManager[5990]: <info>  [1763882331.5200] device (vlan44): carrier: link connected
Nov 23 02:18:54 localhost NetworkManager[5990]: <info>  [1763882334.5790] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23036 uid=0 result="success"
Nov 23 02:18:54 localhost NetworkManager[5990]: <info>  [1763882334.6251] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23051 uid=0 result="success"
Nov 23 02:18:54 localhost NetworkManager[5990]: <info>  [1763882334.6828] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23072 uid=0 result="success"
Nov 23 02:18:54 localhost ifup[23073]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:54 localhost ifup[23074]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:54 localhost ifup[23075]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:54 localhost NetworkManager[5990]: <info>  [1763882334.7134] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23081 uid=0 result="success"
Nov 23 02:18:54 localhost ovs-vsctl[23084]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Nov 23 02:18:54 localhost NetworkManager[5990]: <info>  [1763882334.7532] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Nov 23 02:18:54 localhost kernel: device vlan23 entered promiscuous mode
Nov 23 02:18:54 localhost systemd-udevd[23086]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 02:18:54 localhost NetworkManager[5990]: <info>  [1763882334.7793] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23096 uid=0 result="success"
Nov 23 02:18:54 localhost NetworkManager[5990]: <info>  [1763882334.7998] device (vlan23): carrier: link connected
Nov 23 02:18:57 localhost NetworkManager[5990]: <info>  [1763882337.8576] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23126 uid=0 result="success"
Nov 23 02:18:57 localhost NetworkManager[5990]: <info>  [1763882337.9065] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23141 uid=0 result="success"
Nov 23 02:18:57 localhost NetworkManager[5990]: <info>  [1763882337.9694] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23162 uid=0 result="success"
Nov 23 02:18:57 localhost ifup[23163]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:57 localhost ifup[23164]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:57 localhost ifup[23165]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:58 localhost NetworkManager[5990]: <info>  [1763882338.0019] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23171 uid=0 result="success"
Nov 23 02:18:58 localhost ovs-vsctl[23174]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Nov 23 02:18:58 localhost NetworkManager[5990]: <info>  [1763882338.0619] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23181 uid=0 result="success"
Nov 23 02:18:59 localhost NetworkManager[5990]: <info>  [1763882339.1270] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23208 uid=0 result="success"
Nov 23 02:18:59 localhost NetworkManager[5990]: <info>  [1763882339.1765] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23223 uid=0 result="success"
Nov 23 02:18:59 localhost NetworkManager[5990]: <info>  [1763882339.2360] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23244 uid=0 result="success"
Nov 23 02:18:59 localhost ifup[23245]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:18:59 localhost ifup[23246]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:18:59 localhost ifup[23247]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:18:59 localhost NetworkManager[5990]: <info>  [1763882339.2698] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23253 uid=0 result="success"
Nov 23 02:18:59 localhost ovs-vsctl[23256]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Nov 23 02:18:59 localhost NetworkManager[5990]: <info>  [1763882339.3243] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23263 uid=0 result="success"
Nov 23 02:19:00 localhost NetworkManager[5990]: <info>  [1763882340.3869] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23291 uid=0 result="success"
Nov 23 02:19:00 localhost NetworkManager[5990]: <info>  [1763882340.4296] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23306 uid=0 result="success"
Nov 23 02:19:00 localhost NetworkManager[5990]: <info>  [1763882340.4840] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23327 uid=0 result="success"
Nov 23 02:19:00 localhost ifup[23328]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:19:00 localhost ifup[23329]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:19:00 localhost ifup[23330]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:19:00 localhost NetworkManager[5990]: <info>  [1763882340.5128] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23336 uid=0 result="success"
Nov 23 02:19:00 localhost ovs-vsctl[23339]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Nov 23 02:19:00 localhost NetworkManager[5990]: <info>  [1763882340.5682] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23346 uid=0 result="success"
Nov 23 02:19:01 localhost NetworkManager[5990]: <info>  [1763882341.6248] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23374 uid=0 result="success"
Nov 23 02:19:01 localhost NetworkManager[5990]: <info>  [1763882341.6736] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23389 uid=0 result="success"
Nov 23 02:19:01 localhost NetworkManager[5990]: <info>  [1763882341.7267] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23410 uid=0 result="success"
Nov 23 02:19:01 localhost ifup[23411]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:19:01 localhost ifup[23412]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:19:01 localhost ifup[23413]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:19:01 localhost NetworkManager[5990]: <info>  [1763882341.7542] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23419 uid=0 result="success"
Nov 23 02:19:01 localhost ovs-vsctl[23422]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Nov 23 02:19:01 localhost NetworkManager[5990]: <info>  [1763882341.8129] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23429 uid=0 result="success"
Nov 23 02:19:02 localhost NetworkManager[5990]: <info>  [1763882342.8694] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23457 uid=0 result="success"
Nov 23 02:19:02 localhost NetworkManager[5990]: <info>  [1763882342.9167] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23472 uid=0 result="success"
Nov 23 02:19:02 localhost NetworkManager[5990]: <info>  [1763882342.9748] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23493 uid=0 result="success"
Nov 23 02:19:02 localhost ifup[23494]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 23 02:19:02 localhost ifup[23495]: 'network-scripts' will be removed from distribution in near future.
Nov 23 02:19:02 localhost ifup[23496]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 23 02:19:03 localhost NetworkManager[5990]: <info>  [1763882343.0053] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23502 uid=0 result="success"
Nov 23 02:19:03 localhost ovs-vsctl[23505]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Nov 23 02:19:03 localhost NetworkManager[5990]: <info>  [1763882343.0616] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23512 uid=0 result="success"
Nov 23 02:19:04 localhost NetworkManager[5990]: <info>  [1763882344.1243] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23540 uid=0 result="success"
Nov 23 02:19:04 localhost NetworkManager[5990]: <info>  [1763882344.1711] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23555 uid=0 result="success"
Nov 23 02:19:29 localhost python3[23588]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-00000000001b-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:19:34 localhost python3[23607]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 02:19:34 localhost python3[23623]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 02:19:36 localhost python3[23637]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 02:19:36 localhost python3[23653]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 23 02:19:37 localhost python3[23667]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Nov 23 02:19:37 localhost python3[23682]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005532586.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-000000000022-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:19:38 localhost python3[23702]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1b67-9ac8-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:19:38 localhost systemd[1]: Starting Hostname Service...
Nov 23 02:19:38 localhost systemd[1]: Started Hostname Service.
Nov 23 02:19:38 localhost systemd-hostnamed[23706]: Hostname set to <np0005532586.localdomain> (static)
Nov 23 02:19:38 localhost NetworkManager[5990]: <info>  [1763882378.9807] hostname: static hostname changed from "np0005532586.novalocal" to "np0005532586.localdomain"
Nov 23 02:19:38 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 23 02:19:39 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 23 02:19:40 localhost systemd[1]: session-10.scope: Deactivated successfully.
Nov 23 02:19:40 localhost systemd[1]: session-10.scope: Consumed 1min 44.282s CPU time.
Nov 23 02:19:40 localhost systemd-logind[761]: Session 10 logged out. Waiting for processes to exit.
Nov 23 02:19:40 localhost systemd-logind[761]: Removed session 10.
Nov 23 02:19:43 localhost sshd[23717]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:19:43 localhost systemd-logind[761]: New session 11 of user zuul.
Nov 23 02:19:43 localhost systemd[1]: Started Session 11 of User zuul.
Nov 23 02:19:43 localhost python3[23734]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Nov 23 02:19:45 localhost systemd[1]: session-11.scope: Deactivated successfully.
Nov 23 02:19:45 localhost systemd-logind[761]: Session 11 logged out. Waiting for processes to exit.
Nov 23 02:19:45 localhost systemd-logind[761]: Removed session 11.
Nov 23 02:19:49 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 23 02:20:07 localhost sshd[23736]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:20:09 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 02:20:32 localhost sshd[23740]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:20:38 localhost sshd[23742]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:20:38 localhost systemd-logind[761]: New session 12 of user zuul.
Nov 23 02:20:38 localhost systemd[1]: Started Session 12 of User zuul.
Nov 23 02:20:38 localhost python3[23761]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:20:42 localhost systemd[1]: Reloading.
Nov 23 02:20:42 localhost systemd-rc-local-generator[23805]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:20:42 localhost systemd-sysv-generator[23809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:20:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:20:42 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 23 02:20:42 localhost systemd[1]: Reloading.
Nov 23 02:20:42 localhost systemd-sysv-generator[23850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:20:42 localhost systemd-rc-local-generator[23847]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:20:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:20:43 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 23 02:20:43 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 23 02:20:43 localhost systemd[1]: Reloading.
Nov 23 02:20:43 localhost systemd-rc-local-generator[23884]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:20:43 localhost systemd-sysv-generator[23890]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:20:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:20:43 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Nov 23 02:20:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 02:20:43 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 02:20:43 localhost systemd[1]: Reloading.
Nov 23 02:20:43 localhost systemd-rc-local-generator[23948]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:20:43 localhost systemd-sysv-generator[23951]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:20:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:20:43 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 02:20:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 02:20:44 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 02:20:44 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 02:20:44 localhost systemd[1]: run-r659840262f8c4b6eb6a00e151bf93319.service: Deactivated successfully.
Nov 23 02:20:44 localhost systemd[1]: run-r02092e9451f14172be7dc47fcd74d8c6.service: Deactivated successfully.
Nov 23 02:21:32 localhost sshd[24535]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:21:44 localhost systemd[1]: session-12.scope: Deactivated successfully.
Nov 23 02:21:44 localhost systemd[1]: session-12.scope: Consumed 4.574s CPU time.
Nov 23 02:21:44 localhost systemd-logind[761]: Session 12 logged out. Waiting for processes to exit.
Nov 23 02:21:44 localhost systemd-logind[761]: Removed session 12.
Nov 23 02:22:44 localhost sshd[24539]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:22:46 localhost sshd[24541]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:22:55 localhost sshd[24543]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:24:25 localhost sshd[24545]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:24:56 localhost sshd[24548]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:25:56 localhost sshd[24550]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:27:03 localhost sshd[24552]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:27:25 localhost sshd[24554]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:28:55 localhost sshd[24556]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:29:10 localhost sshd[24559]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:30:25 localhost sshd[24562]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:31:18 localhost sshd[24564]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:31:53 localhost sshd[24566]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:33:18 localhost sshd[24568]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:33:21 localhost sshd[24570]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:34:22 localhost sshd[24572]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:34:42 localhost sshd[24575]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:35:25 localhost sshd[24578]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:37:15 localhost sshd[24581]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:37:15 localhost systemd-logind[761]: New session 13 of user zuul.
Nov 23 02:37:15 localhost systemd[1]: Started Session 13 of User zuul.
Nov 23 02:37:15 localhost python3[24629]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 02:37:17 localhost python3[24716]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:37:20 localhost python3[24733]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:37:21 localhost python3[24749]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:37:21 localhost kernel: loop: module loaded
Nov 23 02:37:21 localhost kernel: loop3: detected capacity change from 0 to 14680064
Nov 23 02:37:21 localhost python3[24774]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:37:21 localhost lvm[24777]: PV /dev/loop3 not used.
Nov 23 02:37:21 localhost lvm[24779]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 02:37:21 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 23 02:37:21 localhost lvm[24787]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 23 02:37:21 localhost lvm[24789]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 02:37:21 localhost lvm[24789]: VG ceph_vg0 finished
Nov 23 02:37:21 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 23 02:37:22 localhost python3[24837]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:37:23 localhost python3[24880]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763883442.3120081-55246-88274094984958/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:37:23 localhost python3[24910]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:37:23 localhost systemd[1]: Reloading.
Nov 23 02:37:24 localhost systemd-rc-local-generator[24933]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:37:24 localhost systemd-sysv-generator[24940]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:37:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:37:24 localhost systemd[1]: Starting Ceph OSD losetup...
Nov 23 02:37:24 localhost bash[24951]: /dev/loop3: [64516]:8399529 (/var/lib/ceph-osd-0.img)
Nov 23 02:37:24 localhost systemd[1]: Finished Ceph OSD losetup.
Nov 23 02:37:24 localhost lvm[24952]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 02:37:24 localhost lvm[24952]: VG ceph_vg0 finished
Nov 23 02:37:24 localhost python3[24969]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:37:27 localhost python3[24986]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:37:28 localhost python3[25002]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:37:28 localhost kernel: loop4: detected capacity change from 0 to 14680064
Nov 23 02:37:28 localhost python3[25024]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:37:28 localhost lvm[25027]: PV /dev/loop4 not used.
Nov 23 02:37:28 localhost lvm[25029]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 23 02:37:29 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 23 02:37:29 localhost lvm[25038]:  1 logical volume(s) in volume group "ceph_vg1" now active
Nov 23 02:37:29 localhost lvm[25040]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 23 02:37:29 localhost lvm[25040]: VG ceph_vg1 finished
Nov 23 02:37:29 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 23 02:37:29 localhost python3[25088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:37:30 localhost python3[25131]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763883449.3892841-55450-213640888513467/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:37:30 localhost python3[25161]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:37:31 localhost systemd[1]: Reloading.
Nov 23 02:37:31 localhost systemd-sysv-generator[25194]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:37:31 localhost systemd-rc-local-generator[25191]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:37:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:37:32 localhost systemd[1]: Starting Ceph OSD losetup...
Nov 23 02:37:32 localhost bash[25203]: /dev/loop4: [64516]:8606977 (/var/lib/ceph-osd-1.img)
Nov 23 02:37:32 localhost systemd[1]: Finished Ceph OSD losetup.
Nov 23 02:37:32 localhost lvm[25205]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 23 02:37:32 localhost lvm[25205]: VG ceph_vg1 finished
Nov 23 02:37:33 localhost sshd[25206]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:37:40 localhost python3[25252]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 02:37:41 localhost python3[25272]: ansible-hostname Invoked with name=np0005532586.localdomain use=None
Nov 23 02:37:41 localhost systemd[1]: Starting Hostname Service...
Nov 23 02:37:42 localhost systemd[1]: Started Hostname Service.
Nov 23 02:37:44 localhost python3[25295]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Nov 23 02:37:44 localhost python3[25343]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible._c3_j69xtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:37:45 localhost python3[25373]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible._c3_j69xtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:37:45 localhost python3[25389]: ansible-blockinfile Invoked with create=True path=/tmp/ansible._c3_j69xtmphosts insertbefore=BOF block=192.168.122.106 np0005532584.localdomain np0005532584#012192.168.122.106 np0005532584.ctlplane.localdomain np0005532584.ctlplane#012192.168.122.107 np0005532585.localdomain np0005532585#012192.168.122.107 np0005532585.ctlplane.localdomain np0005532585.ctlplane#012192.168.122.108 np0005532586.localdomain np0005532586#012192.168.122.108 np0005532586.ctlplane.localdomain np0005532586.ctlplane#012192.168.122.103 np0005532581.localdomain np0005532581#012192.168.122.103 np0005532581.ctlplane.localdomain np0005532581.ctlplane#012192.168.122.104 np0005532582.localdomain np0005532582#012192.168.122.104 np0005532582.ctlplane.localdomain np0005532582.ctlplane#012192.168.122.105 np0005532583.localdomain np0005532583#012192.168.122.105 np0005532583.ctlplane.localdomain np0005532583.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:37:46 localhost python3[25405]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible._c3_j69xtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:37:46 localhost python3[25422]: ansible-file Invoked with path=/tmp/ansible._c3_j69xtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:37:48 localhost python3[25438]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:37:50 localhost python3[25456]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:37:54 localhost python3[25505]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:37:55 localhost python3[25550]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763883474.043745-56297-138119282946032/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:37:56 localhost python3[25580]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:37:56 localhost python3[25598]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 02:37:56 localhost chronyd[766]: chronyd exiting
Nov 23 02:37:56 localhost systemd[1]: Stopping NTP client/server...
Nov 23 02:37:56 localhost systemd[1]: chronyd.service: Deactivated successfully.
Nov 23 02:37:56 localhost systemd[1]: Stopped NTP client/server.
Nov 23 02:37:56 localhost systemd[1]: chronyd.service: Consumed 87ms CPU time, read 1.9M from disk, written 0B to disk.
Nov 23 02:37:56 localhost systemd[1]: Starting NTP client/server...
Nov 23 02:37:56 localhost chronyd[25605]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 02:37:56 localhost chronyd[25605]: Frequency -25.777 +/- 0.154 ppm read from /var/lib/chrony/drift
Nov 23 02:37:56 localhost chronyd[25605]: Loaded seccomp filter (level 2)
Nov 23 02:37:56 localhost systemd[1]: Started NTP client/server.
Nov 23 02:37:57 localhost python3[25654]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:37:58 localhost python3[25697]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763883477.4331005-56486-116111621631972/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:37:58 localhost python3[25727]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:37:58 localhost systemd[1]: Reloading.
Nov 23 02:37:58 localhost systemd-sysv-generator[25754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:37:58 localhost systemd-rc-local-generator[25749]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:37:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:37:58 localhost systemd[1]: Reloading.
Nov 23 02:37:59 localhost systemd-rc-local-generator[25790]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:37:59 localhost systemd-sysv-generator[25794]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:37:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:37:59 localhost systemd[1]: Starting chronyd online sources service...
Nov 23 02:37:59 localhost chronyc[25803]: 200 OK
Nov 23 02:37:59 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Nov 23 02:37:59 localhost systemd[1]: Finished chronyd online sources service.
Nov 23 02:37:59 localhost python3[25820]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:37:59 localhost chronyd[25605]: System clock was stepped by 0.000000 seconds
Nov 23 02:38:00 localhost python3[25837]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:38:01 localhost chronyd[25605]: Selected source 167.160.187.12 (pool.ntp.org)
Nov 23 02:38:10 localhost python3[25854]: ansible-timezone Invoked with name=UTC hwclock=None
Nov 23 02:38:10 localhost systemd[1]: Starting Time & Date Service...
Nov 23 02:38:10 localhost systemd[1]: Started Time & Date Service.
Nov 23 02:38:12 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 02:38:12 localhost python3[25877]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 02:38:12 localhost chronyd[25605]: chronyd exiting
Nov 23 02:38:12 localhost systemd[1]: Stopping NTP client/server...
Nov 23 02:38:12 localhost systemd[1]: chronyd.service: Deactivated successfully.
Nov 23 02:38:12 localhost systemd[1]: Stopped NTP client/server.
Nov 23 02:38:12 localhost systemd[1]: Starting NTP client/server...
Nov 23 02:38:12 localhost chronyd[25884]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 02:38:12 localhost chronyd[25884]: Frequency -25.777 +/- 0.158 ppm read from /var/lib/chrony/drift
Nov 23 02:38:12 localhost chronyd[25884]: Loaded seccomp filter (level 2)
Nov 23 02:38:12 localhost systemd[1]: Started NTP client/server.
Nov 23 02:38:17 localhost chronyd[25884]: Selected source 23.133.168.245 (pool.ntp.org)
Nov 23 02:38:40 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 02:39:45 localhost sshd[26081]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:19 localhost sshd[26083]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:19 localhost systemd[1]: Created slice User Slice of UID 1002.
Nov 23 02:40:19 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Nov 23 02:40:19 localhost systemd-logind[761]: New session 14 of user ceph-admin.
Nov 23 02:40:19 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Nov 23 02:40:19 localhost systemd[1]: Starting User Manager for UID 1002...
Nov 23 02:40:19 localhost sshd[26100]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:19 localhost systemd[26087]: Queued start job for default target Main User Target.
Nov 23 02:40:19 localhost systemd[26087]: Created slice User Application Slice.
Nov 23 02:40:19 localhost systemd[26087]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 02:40:19 localhost systemd[26087]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 02:40:19 localhost systemd[26087]: Reached target Paths.
Nov 23 02:40:19 localhost systemd[26087]: Reached target Timers.
Nov 23 02:40:19 localhost systemd[26087]: Starting D-Bus User Message Bus Socket...
Nov 23 02:40:19 localhost systemd[26087]: Starting Create User's Volatile Files and Directories...
Nov 23 02:40:19 localhost systemd[26087]: Listening on D-Bus User Message Bus Socket.
Nov 23 02:40:19 localhost systemd[26087]: Reached target Sockets.
Nov 23 02:40:19 localhost systemd[26087]: Finished Create User's Volatile Files and Directories.
Nov 23 02:40:19 localhost systemd[26087]: Reached target Basic System.
Nov 23 02:40:19 localhost systemd[26087]: Reached target Main User Target.
Nov 23 02:40:19 localhost systemd[26087]: Startup finished in 118ms.
Nov 23 02:40:19 localhost systemd[1]: Started User Manager for UID 1002.
Nov 23 02:40:19 localhost systemd[1]: Started Session 14 of User ceph-admin.
Nov 23 02:40:19 localhost systemd-logind[761]: New session 16 of user ceph-admin.
Nov 23 02:40:19 localhost systemd[1]: Started Session 16 of User ceph-admin.
Nov 23 02:40:19 localhost sshd[26122]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:20 localhost systemd-logind[761]: New session 17 of user ceph-admin.
Nov 23 02:40:20 localhost systemd[1]: Started Session 17 of User ceph-admin.
Nov 23 02:40:20 localhost sshd[26141]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:20 localhost systemd-logind[761]: New session 18 of user ceph-admin.
Nov 23 02:40:20 localhost systemd[1]: Started Session 18 of User ceph-admin.
Nov 23 02:40:20 localhost sshd[26160]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:20 localhost systemd-logind[761]: New session 19 of user ceph-admin.
Nov 23 02:40:20 localhost systemd[1]: Started Session 19 of User ceph-admin.
Nov 23 02:40:21 localhost sshd[26179]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:21 localhost systemd-logind[761]: New session 20 of user ceph-admin.
Nov 23 02:40:21 localhost systemd[1]: Started Session 20 of User ceph-admin.
Nov 23 02:40:21 localhost sshd[26198]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:21 localhost systemd-logind[761]: New session 21 of user ceph-admin.
Nov 23 02:40:21 localhost systemd[1]: Started Session 21 of User ceph-admin.
Nov 23 02:40:21 localhost sshd[26217]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:21 localhost systemd-logind[761]: New session 22 of user ceph-admin.
Nov 23 02:40:21 localhost systemd[1]: Started Session 22 of User ceph-admin.
Nov 23 02:40:22 localhost sshd[26236]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:22 localhost systemd-logind[761]: New session 23 of user ceph-admin.
Nov 23 02:40:22 localhost systemd[1]: Started Session 23 of User ceph-admin.
Nov 23 02:40:22 localhost sshd[26255]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:22 localhost systemd-logind[761]: New session 24 of user ceph-admin.
Nov 23 02:40:22 localhost systemd[1]: Started Session 24 of User ceph-admin.
Nov 23 02:40:23 localhost sshd[26272]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:23 localhost systemd-logind[761]: New session 25 of user ceph-admin.
Nov 23 02:40:23 localhost systemd[1]: Started Session 25 of User ceph-admin.
Nov 23 02:40:23 localhost sshd[26291]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:40:23 localhost systemd-logind[761]: New session 26 of user ceph-admin.
Nov 23 02:40:23 localhost systemd[1]: Started Session 26 of User ceph-admin.
Nov 23 02:40:24 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:40:43 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:40:43 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:40:44 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:40:44 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:40:44 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26507 (sysctl)
Nov 23 02:40:44 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 23 02:40:44 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 23 02:40:45 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:40:45 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:40:49 localhost kernel: VFS: idmapped mount is not enabled.
Nov 23 02:41:09 localhost podman[26648]: 
Nov 23 02:41:09 localhost podman[26648]: 2025-11-23 07:41:09.024880104 +0000 UTC m=+23.142386761 container create b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_lehmann, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=)
Nov 23 02:41:09 localhost podman[26648]: 2025-11-23 07:40:45.912150021 +0000 UTC m=+0.029656758 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:09 localhost systemd[1]: Created slice Slice /machine.
Nov 23 02:41:09 localhost systemd[1]: Started libpod-conmon-b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb.scope.
Nov 23 02:41:09 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:09 localhost podman[26648]: 2025-11-23 07:41:09.138948457 +0000 UTC m=+23.256455114 container init b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_lehmann, com.redhat.component=rhceph-container, version=7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 02:41:09 localhost podman[26648]: 2025-11-23 07:41:09.152266068 +0000 UTC m=+23.269772725 container start b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_lehmann, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, vcs-type=git, version=7, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 02:41:09 localhost jolly_lehmann[26789]: 167 167
Nov 23 02:41:09 localhost podman[26648]: 2025-11-23 07:41:09.152588075 +0000 UTC m=+23.270094792 container attach b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_lehmann, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 02:41:09 localhost systemd[1]: libpod-b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb.scope: Deactivated successfully.
Nov 23 02:41:09 localhost podman[26648]: 2025-11-23 07:41:09.183019719 +0000 UTC m=+23.300526416 container died b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_lehmann, release=553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 02:41:09 localhost podman[26794]: 2025-11-23 07:41:09.287232783 +0000 UTC m=+0.090996435 container remove b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_lehmann, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.expose-services=, release=553, distribution-scope=public, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.33.12)
Nov 23 02:41:09 localhost systemd[1]: libpod-conmon-b9db89b0717c32bf5ce31dd344052b1fe7fa7daebd2ba73c041acab5c10a22fb.scope: Deactivated successfully.
Nov 23 02:41:09 localhost podman[26813]: 
Nov 23 02:41:09 localhost podman[26813]: 2025-11-23 07:41:09.565818187 +0000 UTC m=+0.116466705 container create 6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_hypatia, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 02:41:09 localhost podman[26813]: 2025-11-23 07:41:09.495083871 +0000 UTC m=+0.045732399 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:09 localhost systemd[1]: Started libpod-conmon-6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5.scope.
Nov 23 02:41:09 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf03cb87aeee83adf0fc695064f592ebdba93d5f0f8c6df6029fc2ca51ef355/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf03cb87aeee83adf0fc695064f592ebdba93d5f0f8c6df6029fc2ca51ef355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:09 localhost podman[26813]: 2025-11-23 07:41:09.707545986 +0000 UTC m=+0.258194514 container init 6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_hypatia, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.openshift.expose-services=, architecture=x86_64)
Nov 23 02:41:09 localhost podman[26813]: 2025-11-23 07:41:09.765843638 +0000 UTC m=+0.316492156 container start 6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_hypatia, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Nov 23 02:41:09 localhost podman[26813]: 2025-11-23 07:41:09.766198956 +0000 UTC m=+0.316847514 container attach 6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_hypatia, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_CLEAN=True, name=rhceph)
Nov 23 02:41:10 localhost systemd[1]: var-lib-containers-storage-overlay-c8dc3c2e87b395d5d9141dd6f7f4346dbe1be62c955b1e611e1d4cb4ae184722-merged.mount: Deactivated successfully.
Nov 23 02:41:10 localhost crazy_hypatia[26860]: [
Nov 23 02:41:10 localhost crazy_hypatia[26860]:    {
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        "available": false,
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        "ceph_device": false,
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        "lsm_data": {},
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        "lvs": [],
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        "path": "/dev/sr0",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        "rejected_reasons": [
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "Insufficient space (<5GB)",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "Has a FileSystem"
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        ],
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        "sys_api": {
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "actuators": null,
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "device_nodes": "sr0",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "human_readable_size": "482.00 KB",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "id_bus": "ata",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "model": "QEMU DVD-ROM",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "nr_requests": "2",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "partitions": {},
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "path": "/dev/sr0",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "removable": "1",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "rev": "2.5+",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "ro": "0",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "rotational": "1",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "sas_address": "",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "sas_device_handle": "",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "scheduler_mode": "mq-deadline",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "sectors": 0,
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "sectorsize": "2048",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "size": 493568.0,
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "support_discard": "0",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "type": "disk",
Nov 23 02:41:10 localhost crazy_hypatia[26860]:            "vendor": "QEMU"
Nov 23 02:41:10 localhost crazy_hypatia[26860]:        }
Nov 23 02:41:10 localhost crazy_hypatia[26860]:    }
Nov 23 02:41:10 localhost crazy_hypatia[26860]: ]
Nov 23 02:41:10 localhost systemd[1]: libpod-6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5.scope: Deactivated successfully.
Nov 23 02:41:10 localhost podman[26813]: 2025-11-23 07:41:10.60828808 +0000 UTC m=+1.158936628 container died 6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_hypatia, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc.)
Nov 23 02:41:13 localhost systemd[1]: var-lib-containers-storage-overlay-aaf03cb87aeee83adf0fc695064f592ebdba93d5f0f8c6df6029fc2ca51ef355-merged.mount: Deactivated successfully.
Nov 23 02:41:13 localhost podman[28275]: 2025-11-23 07:41:13.294657579 +0000 UTC m=+2.668222775 container remove 6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_hypatia, architecture=x86_64, build-date=2025-09-24T08:57:55, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, ceph=True)
Nov 23 02:41:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:13 localhost systemd[1]: libpod-conmon-6b722e047319dd7cff87f8ec21076a941beafde062f498f61648942392215ce5.scope: Deactivated successfully.
Nov 23 02:41:13 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully.
Nov 23 02:41:13 localhost systemd[1]: Closed Process Core Dump Socket.
Nov 23 02:41:13 localhost systemd[1]: Stopping Process Core Dump Socket...
Nov 23 02:41:13 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 23 02:41:13 localhost systemd[1]: Reloading.
Nov 23 02:41:13 localhost systemd-rc-local-generator[28555]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:13 localhost systemd-sysv-generator[28558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:14 localhost systemd[1]: Reloading.
Nov 23 02:41:14 localhost systemd-sysv-generator[28590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:14 localhost systemd-rc-local-generator[28586]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:40 localhost podman[28671]: 
Nov 23 02:41:40 localhost podman[28671]: 2025-11-23 07:41:40.284176974 +0000 UTC m=+0.071311961 container create 4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_sinoussi, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 02:41:40 localhost systemd[1]: Started libpod-conmon-4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7.scope.
Nov 23 02:41:40 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:40 localhost podman[28671]: 2025-11-23 07:41:40.350597143 +0000 UTC m=+0.137732140 container init 4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_sinoussi, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.buildah.version=1.33.12, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7)
Nov 23 02:41:40 localhost podman[28671]: 2025-11-23 07:41:40.254182146 +0000 UTC m=+0.041317143 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:40 localhost podman[28671]: 2025-11-23 07:41:40.3613665 +0000 UTC m=+0.148501497 container start 4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_sinoussi, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main)
Nov 23 02:41:40 localhost podman[28671]: 2025-11-23 07:41:40.361658883 +0000 UTC m=+0.148793920 container attach 4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_sinoussi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, release=553)
Nov 23 02:41:40 localhost systemd[1]: libpod-4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7.scope: Deactivated successfully.
Nov 23 02:41:40 localhost pensive_sinoussi[28686]: 167 167
Nov 23 02:41:40 localhost podman[28671]: 2025-11-23 07:41:40.36614546 +0000 UTC m=+0.153280467 container died 4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_sinoussi, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=)
Nov 23 02:41:40 localhost podman[28691]: 2025-11-23 07:41:40.443610278 +0000 UTC m=+0.067721386 container remove 4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_sinoussi, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph)
Nov 23 02:41:40 localhost systemd[1]: libpod-conmon-4fa56c09ef1b07420b3578dd9056873ebe4b2215115671715ec9b65f158ce5a7.scope: Deactivated successfully.
Nov 23 02:41:40 localhost systemd[1]: Reloading.
Nov 23 02:41:40 localhost systemd-rc-local-generator[28728]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:40 localhost systemd-sysv-generator[28732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:40 localhost systemd[1]: Reloading.
Nov 23 02:41:40 localhost systemd-sysv-generator[28772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:40 localhost systemd-rc-local-generator[28767]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:41 localhost systemd[1]: Reached target All Ceph clusters and services.
Nov 23 02:41:41 localhost systemd[1]: Reloading.
Nov 23 02:41:41 localhost systemd-rc-local-generator[28808]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:41 localhost systemd-sysv-generator[28811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:41 localhost systemd[1]: Reached target Ceph cluster 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 02:41:41 localhost systemd[1]: Reloading.
Nov 23 02:41:41 localhost systemd-sysv-generator[28845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:41 localhost systemd-rc-local-generator[28841]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:41 localhost systemd[1]: Reloading.
Nov 23 02:41:41 localhost systemd-rc-local-generator[28887]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:41 localhost systemd-sysv-generator[28892]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:41 localhost systemd[1]: Created slice Slice /system/ceph-46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 02:41:41 localhost systemd[1]: Reached target System Time Set.
Nov 23 02:41:41 localhost systemd[1]: Reached target System Time Synchronized.
Nov 23 02:41:41 localhost systemd[1]: Starting Ceph crash.np0005532586 for 46550e70-79cb-5f55-bf6d-1204b97e083b...
Nov 23 02:41:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:42 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:42 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 23 02:41:42 localhost podman[28946]: 
Nov 23 02:41:42 localhost podman[28946]: 2025-11-23 07:41:42.196117869 +0000 UTC m=+0.070786125 container create 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=553, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Nov 23 02:41:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc901e702c36e72c1cab4c325d65454b6d9ce1628e73cb3ec7a270fb35e37a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:42 localhost podman[28946]: 2025-11-23 07:41:42.165734405 +0000 UTC m=+0.040402651 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc901e702c36e72c1cab4c325d65454b6d9ce1628e73cb3ec7a270fb35e37a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc901e702c36e72c1cab4c325d65454b6d9ce1628e73cb3ec7a270fb35e37a5/merged/etc/ceph/ceph.client.crash.np0005532586.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:42 localhost podman[28946]: 2025-11-23 07:41:42.293057823 +0000 UTC m=+0.167726079 container init 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 02:41:42 localhost podman[28946]: 2025-11-23 07:41:42.303625276 +0000 UTC m=+0.178293522 container start 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7)
Nov 23 02:41:42 localhost bash[28946]: 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9
Nov 23 02:41:42 localhost systemd[1]: Started Ceph crash.np0005532586 for 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: 2025-11-23T07:41:42.487+0000 7f42184c1640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: 2025-11-23T07:41:42.487+0000 7f42184c1640 -1 AuthRegistry(0x7f4210067c70) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: 2025-11-23T07:41:42.488+0000 7f42184c1640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: 2025-11-23T07:41:42.488+0000 7f42184c1640 -1 AuthRegistry(0x7f42184c0000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: 2025-11-23T07:41:42.494+0000 7f4216236640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: 2025-11-23T07:41:42.496+0000 7f4215a35640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: 2025-11-23T07:41:42.497+0000 7f4216a37640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: 2025-11-23T07:41:42.497+0000 7f42184c1640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 23 02:41:42 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586[28961]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 23 02:41:45 localhost podman[29048]: 
Nov 23 02:41:45 localhost podman[29048]: 2025-11-23 07:41:45.625095951 +0000 UTC m=+0.072266674 container create 44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_mccarthy, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 02:41:45 localhost systemd[1]: Started libpod-conmon-44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57.scope.
Nov 23 02:41:45 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:45 localhost podman[29048]: 2025-11-23 07:41:45.596131486 +0000 UTC m=+0.043302199 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:45 localhost podman[29048]: 2025-11-23 07:41:45.698023742 +0000 UTC m=+0.145194465 container init 44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_mccarthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git)
Nov 23 02:41:45 localhost podman[29048]: 2025-11-23 07:41:45.70572609 +0000 UTC m=+0.152896783 container start 44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_mccarthy, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 02:41:45 localhost podman[29048]: 2025-11-23 07:41:45.705847521 +0000 UTC m=+0.153018214 container attach 44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_mccarthy, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7)
Nov 23 02:41:45 localhost angry_mccarthy[29064]: 167 167
Nov 23 02:41:45 localhost systemd[1]: libpod-44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57.scope: Deactivated successfully.
Nov 23 02:41:45 localhost podman[29048]: 2025-11-23 07:41:45.710382818 +0000 UTC m=+0.157553511 container died 44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_mccarthy, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Nov 23 02:41:45 localhost podman[29069]: 2025-11-23 07:41:45.792656247 +0000 UTC m=+0.070784584 container remove 44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_mccarthy, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, name=rhceph, vcs-type=git, build-date=2025-09-24T08:57:55, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:41:45 localhost systemd[1]: libpod-conmon-44d40ae1c371b2eb59b77052e69e7284c9956710ac95c688390315efc2614f57.scope: Deactivated successfully.
Nov 23 02:41:45 localhost podman[29089]: 
Nov 23 02:41:45 localhost podman[29089]: 2025-11-23 07:41:45.992997387 +0000 UTC m=+0.073882284 container create 5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_blackwell, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=553)
Nov 23 02:41:46 localhost systemd[1]: Started libpod-conmon-5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7.scope.
Nov 23 02:41:46 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21501767c23320ee261a54f97d96695ea103ac97b9d1fd2af8c1c178f7b2d463/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:46 localhost podman[29089]: 2025-11-23 07:41:45.963457534 +0000 UTC m=+0.044342451 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21501767c23320ee261a54f97d96695ea103ac97b9d1fd2af8c1c178f7b2d463/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21501767c23320ee261a54f97d96695ea103ac97b9d1fd2af8c1c178f7b2d463/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21501767c23320ee261a54f97d96695ea103ac97b9d1fd2af8c1c178f7b2d463/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21501767c23320ee261a54f97d96695ea103ac97b9d1fd2af8c1c178f7b2d463/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:46 localhost podman[29089]: 2025-11-23 07:41:46.11595911 +0000 UTC m=+0.196844007 container init 5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_blackwell, release=553, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git)
Nov 23 02:41:46 localhost podman[29089]: 2025-11-23 07:41:46.125897536 +0000 UTC m=+0.206782423 container start 5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_blackwell, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 02:41:46 localhost podman[29089]: 2025-11-23 07:41:46.126111768 +0000 UTC m=+0.206996665 container attach 5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_blackwell, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, build-date=2025-09-24T08:57:55, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=)
Nov 23 02:41:46 localhost priceless_blackwell[29104]: --> passed data devices: 0 physical, 2 LVM
Nov 23 02:41:46 localhost priceless_blackwell[29104]: --> relative data size: 1.0
Nov 23 02:41:46 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 02:41:46 localhost systemd[1]: tmp-crun.AzUEi1.mount: Deactivated successfully.
Nov 23 02:41:46 localhost systemd[1]: var-lib-containers-storage-overlay-5c90798471d9828fc31a114bb50dca2d4f730a69c94371ea29a7a96b3af3820d-merged.mount: Deactivated successfully.
Nov 23 02:41:46 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d3b1e1ca-55bf-4929-99f7-e0e466def09e
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 02:41:47 localhost lvm[29158]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 02:41:47 localhost lvm[29158]: VG ceph_vg0 finished
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 23 02:41:47 localhost priceless_blackwell[29104]: stderr: got monmap epoch 3
Nov 23 02:41:47 localhost priceless_blackwell[29104]: --> Creating keyring file for osd.1
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 23 02:41:47 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid d3b1e1ca-55bf-4929-99f7-e0e466def09e --setuser ceph --setgroup ceph
Nov 23 02:41:49 localhost priceless_blackwell[29104]: stderr: 2025-11-23T07:41:47.791+0000 7f9c5e207a80 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 23 02:41:49 localhost priceless_blackwell[29104]: stderr: 2025-11-23T07:41:47.791+0000 7f9c5e207a80 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 23 02:41:49 localhost priceless_blackwell[29104]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 02:41:50 localhost priceless_blackwell[29104]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 23 02:41:50 localhost priceless_blackwell[29104]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0f3557b9-ced6-450d-839e-568c561d51bc
Nov 23 02:41:50 localhost lvm[30087]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 23 02:41:50 localhost lvm[30087]: VG ceph_vg1 finished
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Nov 23 02:41:50 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
Nov 23 02:41:51 localhost priceless_blackwell[29104]: stderr: got monmap epoch 3
Nov 23 02:41:51 localhost priceless_blackwell[29104]: --> Creating keyring file for osd.4
Nov 23 02:41:51 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Nov 23 02:41:51 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Nov 23 02:41:51 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid 0f3557b9-ced6-450d-839e-568c561d51bc --setuser ceph --setgroup ceph
Nov 23 02:41:53 localhost priceless_blackwell[29104]: stderr: 2025-11-23T07:41:51.344+0000 7fb69649ea80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 23 02:41:53 localhost priceless_blackwell[29104]: stderr: 2025-11-23T07:41:51.344+0000 7fb69649ea80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
Nov 23 02:41:53 localhost priceless_blackwell[29104]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 23 02:41:53 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Nov 23 02:41:53 localhost priceless_blackwell[29104]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Nov 23 02:41:53 localhost priceless_blackwell[29104]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Nov 23 02:41:53 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Nov 23 02:41:53 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 23 02:41:53 localhost priceless_blackwell[29104]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Nov 23 02:41:53 localhost priceless_blackwell[29104]: --> ceph-volume lvm activate successful for osd ID: 4
Nov 23 02:41:53 localhost priceless_blackwell[29104]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 23 02:41:53 localhost systemd[1]: libpod-5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7.scope: Deactivated successfully.
Nov 23 02:41:53 localhost systemd[1]: libpod-5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7.scope: Consumed 3.789s CPU time.
Nov 23 02:41:54 localhost podman[30985]: 2025-11-23 07:41:54.019175919 +0000 UTC m=+0.056477424 container died 5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_blackwell, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, release=553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main)
Nov 23 02:41:54 localhost systemd[1]: var-lib-containers-storage-overlay-21501767c23320ee261a54f97d96695ea103ac97b9d1fd2af8c1c178f7b2d463-merged.mount: Deactivated successfully.
Nov 23 02:41:54 localhost podman[30985]: 2025-11-23 07:41:54.055533488 +0000 UTC m=+0.092834943 container remove 5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_blackwell, architecture=x86_64, version=7, io.buildah.version=1.33.12, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 02:41:54 localhost systemd[1]: libpod-conmon-5bd220580b42604265c1fe8904b369982fca2d805f971d550482d2e3db1158d7.scope: Deactivated successfully.
Nov 23 02:41:54 localhost sshd[31059]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:41:54 localhost podman[31073]: 
Nov 23 02:41:54 localhost podman[31073]: 2025-11-23 07:41:54.826777868 +0000 UTC m=+0.070872857 container create 3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_fermi, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55)
Nov 23 02:41:54 localhost systemd[1]: Started libpod-conmon-3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077.scope.
Nov 23 02:41:54 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:54 localhost podman[31073]: 2025-11-23 07:41:54.796644197 +0000 UTC m=+0.040739226 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:54 localhost podman[31073]: 2025-11-23 07:41:54.900506609 +0000 UTC m=+0.144601598 container init 3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_fermi, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, vcs-type=git, release=553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7)
Nov 23 02:41:54 localhost podman[31073]: 2025-11-23 07:41:54.912561901 +0000 UTC m=+0.156656890 container start 3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_fermi, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 02:41:54 localhost podman[31073]: 2025-11-23 07:41:54.912919965 +0000 UTC m=+0.157014994 container attach 3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_fermi, name=rhceph, RELEASE=main, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 02:41:54 localhost upbeat_fermi[31088]: 167 167
Nov 23 02:41:54 localhost systemd[1]: libpod-3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077.scope: Deactivated successfully.
Nov 23 02:41:54 localhost podman[31073]: 2025-11-23 07:41:54.919073774 +0000 UTC m=+0.163168793 container died 3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_fermi, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main)
Nov 23 02:41:55 localhost podman[31093]: 2025-11-23 07:41:55.014837133 +0000 UTC m=+0.081424419 container remove 3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_fermi, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, version=7)
Nov 23 02:41:55 localhost systemd[1]: libpod-conmon-3fdcd59f539db687df10c63a148ecd70262c8f07f09790d3ab4fbde3ff4a4077.scope: Deactivated successfully.
Nov 23 02:41:55 localhost systemd[1]: var-lib-containers-storage-overlay-1374d3dc1c391b5f5c2bc0e21e33527a67aedcf22a482a1fca402b3695576cd4-merged.mount: Deactivated successfully.
Nov 23 02:41:55 localhost podman[31114]: 
Nov 23 02:41:55 localhost podman[31114]: 2025-11-23 07:41:55.234717789 +0000 UTC m=+0.078108658 container create 77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_wescoff, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:41:55 localhost systemd[1]: Started libpod-conmon-77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5.scope.
Nov 23 02:41:55 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:55 localhost podman[31114]: 2025-11-23 07:41:55.203760748 +0000 UTC m=+0.047151677 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09edce7e1aaa4e3351259e3b3a032d80ac9d4393e9b1d700af2041c9979074da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09edce7e1aaa4e3351259e3b3a032d80ac9d4393e9b1d700af2041c9979074da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09edce7e1aaa4e3351259e3b3a032d80ac9d4393e9b1d700af2041c9979074da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:55 localhost podman[31114]: 2025-11-23 07:41:55.343846307 +0000 UTC m=+0.187237196 container init 77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_wescoff, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc.)
Nov 23 02:41:55 localhost podman[31114]: 2025-11-23 07:41:55.355296761 +0000 UTC m=+0.198687660 container start 77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_wescoff, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, release=553, ceph=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 02:41:55 localhost podman[31114]: 2025-11-23 07:41:55.355650547 +0000 UTC m=+0.199041496 container attach 77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_wescoff, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, release=553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 02:41:55 localhost happy_wescoff[31129]: {
Nov 23 02:41:55 localhost happy_wescoff[31129]:    "1": [
Nov 23 02:41:55 localhost happy_wescoff[31129]:        {
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "devices": [
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "/dev/loop3"
Nov 23 02:41:55 localhost happy_wescoff[31129]:            ],
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_name": "ceph_lv0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_size": "7511998464",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=28CWvr-EBdM-GxMm-LkXR-x3Ld-bHiP-xcJYeO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=46550e70-79cb-5f55-bf6d-1204b97e083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d3b1e1ca-55bf-4929-99f7-e0e466def09e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_uuid": "28CWvr-EBdM-GxMm-LkXR-x3Ld-bHiP-xcJYeO",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "name": "ceph_lv0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "tags": {
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.block_uuid": "28CWvr-EBdM-GxMm-LkXR-x3Ld-bHiP-xcJYeO",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.cephx_lockbox_secret": "",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.cluster_fsid": "46550e70-79cb-5f55-bf6d-1204b97e083b",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.cluster_name": "ceph",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.crush_device_class": "",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.encrypted": "0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.osd_fsid": "d3b1e1ca-55bf-4929-99f7-e0e466def09e",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.osd_id": "1",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.type": "block",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.vdo": "0"
Nov 23 02:41:55 localhost happy_wescoff[31129]:            },
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "type": "block",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "vg_name": "ceph_vg0"
Nov 23 02:41:55 localhost happy_wescoff[31129]:        }
Nov 23 02:41:55 localhost happy_wescoff[31129]:    ],
Nov 23 02:41:55 localhost happy_wescoff[31129]:    "4": [
Nov 23 02:41:55 localhost happy_wescoff[31129]:        {
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "devices": [
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "/dev/loop4"
Nov 23 02:41:55 localhost happy_wescoff[31129]:            ],
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_name": "ceph_lv1",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_size": "7511998464",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=jwnlh6-fTAz-Bwkf-QXDB-AU3Q-DC6T-1Qkxsb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=46550e70-79cb-5f55-bf6d-1204b97e083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0f3557b9-ced6-450d-839e-568c561d51bc,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "lv_uuid": "jwnlh6-fTAz-Bwkf-QXDB-AU3Q-DC6T-1Qkxsb",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "name": "ceph_lv1",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "path": "/dev/ceph_vg1/ceph_lv1",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "tags": {
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.block_uuid": "jwnlh6-fTAz-Bwkf-QXDB-AU3Q-DC6T-1Qkxsb",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.cephx_lockbox_secret": "",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.cluster_fsid": "46550e70-79cb-5f55-bf6d-1204b97e083b",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.cluster_name": "ceph",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.crush_device_class": "",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.encrypted": "0",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.osd_fsid": "0f3557b9-ced6-450d-839e-568c561d51bc",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.osd_id": "4",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.type": "block",
Nov 23 02:41:55 localhost happy_wescoff[31129]:                "ceph.vdo": "0"
Nov 23 02:41:55 localhost happy_wescoff[31129]:            },
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "type": "block",
Nov 23 02:41:55 localhost happy_wescoff[31129]:            "vg_name": "ceph_vg1"
Nov 23 02:41:55 localhost happy_wescoff[31129]:        }
Nov 23 02:41:55 localhost happy_wescoff[31129]:    ]
Nov 23 02:41:55 localhost happy_wescoff[31129]: }
Nov 23 02:41:55 localhost systemd[1]: libpod-77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5.scope: Deactivated successfully.
Nov 23 02:41:55 localhost podman[31114]: 2025-11-23 07:41:55.718307516 +0000 UTC m=+0.561698485 container died 77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_wescoff, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, release=553, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public)
Nov 23 02:41:55 localhost podman[31138]: 2025-11-23 07:41:55.815648095 +0000 UTC m=+0.082805077 container remove 77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_wescoff, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 02:41:55 localhost systemd[1]: libpod-conmon-77a141ccf04516ab92c05574a1992e18dba07d890f1442d34cc07b33df9efab5.scope: Deactivated successfully.
Nov 23 02:41:56 localhost systemd[1]: var-lib-containers-storage-overlay-09edce7e1aaa4e3351259e3b3a032d80ac9d4393e9b1d700af2041c9979074da-merged.mount: Deactivated successfully.
Nov 23 02:41:56 localhost podman[31223]: 
Nov 23 02:41:56 localhost podman[31223]: 2025-11-23 07:41:56.60788508 +0000 UTC m=+0.071452424 container create 0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_austin, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.tags=rhceph ceph, release=553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=)
Nov 23 02:41:56 localhost systemd[1]: Started libpod-conmon-0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e.scope.
Nov 23 02:41:56 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:56 localhost podman[31223]: 2025-11-23 07:41:56.678507042 +0000 UTC m=+0.142074386 container init 0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_austin, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, release=553, com.redhat.component=rhceph-container)
Nov 23 02:41:56 localhost podman[31223]: 2025-11-23 07:41:56.580937719 +0000 UTC m=+0.044505083 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:56 localhost podman[31223]: 2025-11-23 07:41:56.689421239 +0000 UTC m=+0.152988583 container start 0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_austin, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Nov 23 02:41:56 localhost podman[31223]: 2025-11-23 07:41:56.689752784 +0000 UTC m=+0.153320178 container attach 0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_austin, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, RELEASE=main)
Nov 23 02:41:56 localhost inspiring_austin[31238]: 167 167
Nov 23 02:41:56 localhost systemd[1]: libpod-0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e.scope: Deactivated successfully.
Nov 23 02:41:56 localhost podman[31223]: 2025-11-23 07:41:56.693071755 +0000 UTC m=+0.156639099 container died 0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_austin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True)
Nov 23 02:41:56 localhost podman[31243]: 2025-11-23 07:41:56.768499438 +0000 UTC m=+0.066481321 container remove 0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_austin, io.openshift.expose-services=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.33.12, release=553)
Nov 23 02:41:56 localhost systemd[1]: libpod-conmon-0cf83f979df561efd0382f4c3b4aa0b29b3641e0690dfcbbe94cc39fb71a9b1e.scope: Deactivated successfully.
Nov 23 02:41:57 localhost systemd[1]: var-lib-containers-storage-overlay-c75d7fbf0d8bc59d067435f109ca357411025822107247f3039c9668378b1e22-merged.mount: Deactivated successfully.
Nov 23 02:41:57 localhost podman[31272]: 
Nov 23 02:41:57 localhost podman[31272]: 2025-11-23 07:41:57.121279393 +0000 UTC m=+0.085447991 container create 9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main)
Nov 23 02:41:57 localhost systemd[1]: Started libpod-conmon-9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba.scope.
Nov 23 02:41:57 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:57 localhost podman[31272]: 2025-11-23 07:41:57.087543247 +0000 UTC m=+0.051711865 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e811aedaae908b66260ac3c54c3e3e4ec51fff5685b60a64b25c84ff7a1590c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e811aedaae908b66260ac3c54c3e3e4ec51fff5685b60a64b25c84ff7a1590c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e811aedaae908b66260ac3c54c3e3e4ec51fff5685b60a64b25c84ff7a1590c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e811aedaae908b66260ac3c54c3e3e4ec51fff5685b60a64b25c84ff7a1590c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e811aedaae908b66260ac3c54c3e3e4ec51fff5685b60a64b25c84ff7a1590c1/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:57 localhost podman[31272]: 2025-11-23 07:41:57.252345938 +0000 UTC m=+0.216514646 container init 9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 02:41:57 localhost podman[31272]: 2025-11-23 07:41:57.262889971 +0000 UTC m=+0.227058559 container start 9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64)
Nov 23 02:41:57 localhost podman[31272]: 2025-11-23 07:41:57.263197594 +0000 UTC m=+0.227366182 container attach 9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553)
Nov 23 02:41:57 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test[31288]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 23 02:41:57 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test[31288]:                            [--no-systemd] [--no-tmpfs]
Nov 23 02:41:57 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test[31288]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 23 02:41:57 localhost systemd[1]: libpod-9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba.scope: Deactivated successfully.
Nov 23 02:41:57 localhost podman[31272]: 2025-11-23 07:41:57.488748833 +0000 UTC m=+0.452917431 container died 9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, RELEASE=main)
Nov 23 02:41:57 localhost podman[31293]: 2025-11-23 07:41:57.578401565 +0000 UTC m=+0.077448719 container remove 9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate-test, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, GIT_BRANCH=main)
Nov 23 02:41:57 localhost systemd-journald[619]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Nov 23 02:41:57 localhost systemd-journald[619]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 02:41:57 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 02:41:57 localhost systemd[1]: libpod-conmon-9fb1938893df8e219a6ad06a39160b570fca2e6bdd53d8fd724191b6a926f3ba.scope: Deactivated successfully.
Nov 23 02:41:57 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 02:41:57 localhost systemd[1]: Reloading.
Nov 23 02:41:57 localhost systemd-sysv-generator[31352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:57 localhost systemd-rc-local-generator[31347]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:58 localhost systemd[1]: var-lib-containers-storage-overlay-e811aedaae908b66260ac3c54c3e3e4ec51fff5685b60a64b25c84ff7a1590c1-merged.mount: Deactivated successfully.
Nov 23 02:41:58 localhost systemd[1]: Reloading.
Nov 23 02:41:58 localhost systemd-rc-local-generator[31386]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:41:58 localhost systemd-sysv-generator[31390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:41:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:41:58 localhost systemd[1]: Starting Ceph osd.1 for 46550e70-79cb-5f55-bf6d-1204b97e083b...
Nov 23 02:41:58 localhost podman[31454]: 
Nov 23 02:41:58 localhost podman[31454]: 2025-11-23 07:41:58.664506941 +0000 UTC m=+0.058676172 container create 582adf8436e845a503eb9e49c90254483f8fb0ba04a8b20841457123091213d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, io.buildah.version=1.33.12, version=7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, release=553, ceph=True, RELEASE=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:41:58 localhost systemd[1]: Started libcrun container.
Nov 23 02:41:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e9754ee5b6876f967e612edceaa97eb297ec734b84ef0d0266cbdd9d4c18f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:58 localhost podman[31454]: 2025-11-23 07:41:58.633428208 +0000 UTC m=+0.027597439 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:41:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e9754ee5b6876f967e612edceaa97eb297ec734b84ef0d0266cbdd9d4c18f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e9754ee5b6876f967e612edceaa97eb297ec734b84ef0d0266cbdd9d4c18f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e9754ee5b6876f967e612edceaa97eb297ec734b84ef0d0266cbdd9d4c18f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e9754ee5b6876f967e612edceaa97eb297ec734b84ef0d0266cbdd9d4c18f2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 02:41:58 localhost podman[31454]: 2025-11-23 07:41:58.786223577 +0000 UTC m=+0.180392808 container init 582adf8436e845a503eb9e49c90254483f8fb0ba04a8b20841457123091213d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, release=553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55)
Nov 23 02:41:58 localhost podman[31454]: 2025-11-23 07:41:58.79517721 +0000 UTC m=+0.189346441 container start 582adf8436e845a503eb9e49c90254483f8fb0ba04a8b20841457123091213d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, release=553, vcs-type=git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 02:41:58 localhost podman[31454]: 2025-11-23 07:41:58.795642617 +0000 UTC m=+0.189811858 container attach 582adf8436e845a503eb9e49c90254483f8fb0ba04a8b20841457123091213d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, release=553, ceph=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate[31467]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 02:41:59 localhost bash[31454]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate[31467]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 23 02:41:59 localhost bash[31454]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate[31467]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 23 02:41:59 localhost bash[31454]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate[31467]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 02:41:59 localhost bash[31454]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate[31467]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 23 02:41:59 localhost bash[31454]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate[31467]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 02:41:59 localhost bash[31454]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 23 02:41:59 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate[31467]: --> ceph-volume raw activate successful for osd ID: 1
Nov 23 02:41:59 localhost bash[31454]: --> ceph-volume raw activate successful for osd ID: 1
Nov 23 02:41:59 localhost systemd[1]: libpod-582adf8436e845a503eb9e49c90254483f8fb0ba04a8b20841457123091213d2.scope: Deactivated successfully.
Nov 23 02:41:59 localhost podman[31454]: 2025-11-23 07:41:59.549940702 +0000 UTC m=+0.944109933 container died 582adf8436e845a503eb9e49c90254483f8fb0ba04a8b20841457123091213d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 02:41:59 localhost systemd[1]: var-lib-containers-storage-overlay-14e9754ee5b6876f967e612edceaa97eb297ec734b84ef0d0266cbdd9d4c18f2-merged.mount: Deactivated successfully.
Nov 23 02:41:59 localhost podman[31592]: 2025-11-23 07:41:59.644635428 +0000 UTC m=+0.081927486 container remove 582adf8436e845a503eb9e49c90254483f8fb0ba04a8b20841457123091213d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1-activate, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True)
Nov 23 02:41:59 localhost podman[31650]: 
Nov 23 02:41:59 localhost podman[31650]: 2025-11-23 07:41:59.972259145 +0000 UTC m=+0.072498106 container create e6172d57547ac496618726c03901667c957c3e0e29a3177e0edc852692f5a5de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacdf42aa93d9bcce2e401f0c1eb7140d065337726ccd0c3d5a01c59800ac0e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacdf42aa93d9bcce2e401f0c1eb7140d065337726ccd0c3d5a01c59800ac0e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:00 localhost podman[31650]: 2025-11-23 07:41:59.943544762 +0000 UTC m=+0.043783753 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacdf42aa93d9bcce2e401f0c1eb7140d065337726ccd0c3d5a01c59800ac0e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacdf42aa93d9bcce2e401f0c1eb7140d065337726ccd0c3d5a01c59800ac0e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacdf42aa93d9bcce2e401f0c1eb7140d065337726ccd0c3d5a01c59800ac0e7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:00 localhost podman[31650]: 2025-11-23 07:42:00.083610221 +0000 UTC m=+0.183849172 container init e6172d57547ac496618726c03901667c957c3e0e29a3177e0edc852692f5a5de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, release=553)
Nov 23 02:42:00 localhost podman[31650]: 2025-11-23 07:42:00.095917237 +0000 UTC m=+0.196156188 container start e6172d57547ac496618726c03901667c957c3e0e29a3177e0edc852692f5a5de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55)
Nov 23 02:42:00 localhost bash[31650]: e6172d57547ac496618726c03901667c957c3e0e29a3177e0edc852692f5a5de
Nov 23 02:42:00 localhost systemd[1]: Started Ceph osd.1 for 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 02:42:00 localhost ceph-osd[31668]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 02:42:00 localhost ceph-osd[31668]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Nov 23 02:42:00 localhost ceph-osd[31668]: pidfile_write: ignore empty --pid-file
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:00 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:00 localhost ceph-osd[31668]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 02:42:00 localhost systemd[1]: tmp-crun.Qti5gq.mount: Deactivated successfully.
Nov 23 02:42:00 localhost ceph-osd[31668]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 23 02:42:00 localhost ceph-osd[31668]: load: jerasure load: lrc 
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:00 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 02:42:00 localhost podman[31758]: 
Nov 23 02:42:00 localhost podman[31758]: 2025-11-23 07:42:00.93877386 +0000 UTC m=+0.071967220 container create be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_nightingale, vendor=Red Hat, Inc., release=553, RELEASE=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, version=7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:00 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:00 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 02:42:00 localhost systemd[1]: Started libpod-conmon-be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c.scope.
Nov 23 02:42:01 localhost systemd[1]: Started libcrun container.
Nov 23 02:42:01 localhost podman[31758]: 2025-11-23 07:42:00.911564636 +0000 UTC m=+0.044758006 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:01 localhost podman[31758]: 2025-11-23 07:42:01.01404992 +0000 UTC m=+0.147243280 container init be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_nightingale, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public)
Nov 23 02:42:01 localhost podman[31758]: 2025-11-23 07:42:01.022684709 +0000 UTC m=+0.155878079 container start be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_nightingale, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, release=553, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 02:42:01 localhost podman[31758]: 2025-11-23 07:42:01.022947283 +0000 UTC m=+0.156140653 container attach be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_nightingale, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 02:42:01 localhost serene_nightingale[31778]: 167 167
Nov 23 02:42:01 localhost systemd[1]: libpod-be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c.scope: Deactivated successfully.
Nov 23 02:42:01 localhost podman[31758]: 2025-11-23 07:42:01.027847254 +0000 UTC m=+0.161040674 container died be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_nightingale, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main)
Nov 23 02:42:01 localhost podman[31783]: 2025-11-23 07:42:01.126246517 +0000 UTC m=+0.083042119 container remove be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_nightingale, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, RELEASE=main, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:42:01 localhost systemd[1]: libpod-conmon-be08fcf86d973ef212d78888779ea8607eec4b34f61d5ae77a37446ac8bc6a2c.scope: Deactivated successfully.
Nov 23 02:42:01 localhost ceph-osd[31668]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 23 02:42:01 localhost ceph-osd[31668]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9140e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs mount
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs mount shared_bdev_used = 0
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: RocksDB version: 7.9.2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Git sha 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: DB SUMMARY
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: DB Session ID:  HQEBS6QQ9GUCICJN50RD
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: CURRENT file:  CURRENT
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                         Options.error_if_exists: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.create_if_missing: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                                     Options.env: 0x5606e93d4cb0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                                Options.info_log: 0x5606ea0de380
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                              Options.statistics: (nil)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.use_fsync: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                              Options.db_log_dir: 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.write_buffer_manager: 0x5606e912b400
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.unordered_write: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.row_cache: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                              Options.wal_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.two_write_queues: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.wal_compression: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.atomic_flush: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.max_background_jobs: 4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.max_background_compactions: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.max_subcompactions: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.max_open_files: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Compression algorithms supported:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kZSTD supported: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kXpressCompression supported: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kBZip2Compression supported: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kLZ4Compression supported: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kZlibCompression supported: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kSnappyCompression supported: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de540)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5606e9118850
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de540)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5606e9118850
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de540)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e9118850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de540)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e9118850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de540)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e9118850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de540)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5606e9118850
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de540)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5606e9118850
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de760)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5606e91182d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de760)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5606e91182d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea0de760)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5606e91182d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8aa3505a-9776-4fc8-9947-b7bb33c218e1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721267771, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721268055, "job": 1, "event": "recovery_finished"}
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 23 02:42:01 localhost ceph-osd[31668]: freelist init
Nov 23 02:42:01 localhost ceph-osd[31668]: freelist _read_cfg
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs umount
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) close
Nov 23 02:42:01 localhost podman[32010]: 2025-11-23 07:42:01.42580312 +0000 UTC m=+0.074581362 container create dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 02:42:01 localhost systemd[1]: Started libpod-conmon-dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4.scope.
Nov 23 02:42:01 localhost systemd[1]: Started libcrun container.
Nov 23 02:42:01 localhost podman[32010]: 2025-11-23 07:42:01.394305612 +0000 UTC m=+0.043083894 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dcfad0f1eb1c41567217170ce4a230688a29b933d8eef74670cb70be78eb4c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dcfad0f1eb1c41567217170ce4a230688a29b933d8eef74670cb70be78eb4c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dcfad0f1eb1c41567217170ce4a230688a29b933d8eef74670cb70be78eb4c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dcfad0f1eb1c41567217170ce4a230688a29b933d8eef74670cb70be78eb4c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dcfad0f1eb1c41567217170ce4a230688a29b933d8eef74670cb70be78eb4c3/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:01 localhost podman[32010]: 2025-11-23 07:42:01.544907324 +0000 UTC m=+0.193685556 container init dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 23 02:42:01 localhost ceph-osd[31668]: bdev(0x5606e9141180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs mount
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 23 02:42:01 localhost ceph-osd[31668]: bluefs mount shared_bdev_used = 4718592
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 23 02:42:01 localhost podman[32010]: 2025-11-23 07:42:01.557894989 +0000 UTC m=+0.206673231 container start dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., version=7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.component=rhceph-container, release=553, architecture=x86_64)
Nov 23 02:42:01 localhost podman[32010]: 2025-11-23 07:42:01.558351584 +0000 UTC m=+0.207129836 container attach dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: RocksDB version: 7.9.2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Git sha 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: DB SUMMARY
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: DB Session ID:  HQEBS6QQ9GUCICJN50RC
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: CURRENT file:  CURRENT
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                         Options.error_if_exists: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.create_if_missing: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                                     Options.env: 0x5606ea3ba460
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                                Options.info_log: 0x5606ea0df480
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                              Options.statistics: (nil)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.use_fsync: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                              Options.db_log_dir: 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.write_buffer_manager: 0x5606e912b540
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.unordered_write: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.row_cache: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                              Options.wal_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.two_write_queues: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.wal_compression: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.atomic_flush: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.max_background_jobs: 4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.max_background_compactions: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.max_subcompactions: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.max_open_files: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Compression algorithms supported:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kZSTD supported: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kXpressCompression supported: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kBZip2Compression supported: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kLZ4Compression supported: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kZlibCompression supported: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: 	kSnappyCompression supported: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284e00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5606e91182d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: 
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284e00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e91182d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284e00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e91182d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284e00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e91182d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284e00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e91182d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284e00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e91182d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284e00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e91182d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e9119610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e9119610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5606ea284bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5606e9119610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8aa3505a-9776-4fc8-9947-b7bb33c218e1
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721588064, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721592705, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883721, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8aa3505a-9776-4fc8-9947-b7bb33c218e1", "db_session_id": "HQEBS6QQ9GUCICJN50RC", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721596578, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883721, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8aa3505a-9776-4fc8-9947-b7bb33c218e1", "db_session_id": "HQEBS6QQ9GUCICJN50RC", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721600099, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883721, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8aa3505a-9776-4fc8-9947-b7bb33c218e1", "db_session_id": "HQEBS6QQ9GUCICJN50RC", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883721604280, "job": 1, "event": "recovery_finished"}
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 23 02:42:01 localhost systemd[1]: tmp-crun.hga0ce.mount: Deactivated successfully.
Nov 23 02:42:01 localhost systemd[1]: var-lib-containers-storage-overlay-91c5574ea847bfe0398464f5454b02ed4dbe39a01f2cb2eadd9de32092c857f7-merged.mount: Deactivated successfully.
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5606ea290380
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: DB pointer 0x5606ea035a00
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 23 02:42:01 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 02:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 460.80 MB usag
Nov 23 02:42:01 localhost ceph-osd[31668]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 23 02:42:01 localhost ceph-osd[31668]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 23 02:42:01 localhost ceph-osd[31668]: _get_class not permitted to load lua
Nov 23 02:42:01 localhost ceph-osd[31668]: _get_class not permitted to load sdk
Nov 23 02:42:01 localhost ceph-osd[31668]: _get_class not permitted to load test_remote_reads
Nov 23 02:42:01 localhost ceph-osd[31668]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 23 02:42:01 localhost ceph-osd[31668]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 23 02:42:01 localhost ceph-osd[31668]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 23 02:42:01 localhost ceph-osd[31668]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 23 02:42:01 localhost ceph-osd[31668]: osd.1 0 load_pgs
Nov 23 02:42:01 localhost ceph-osd[31668]: osd.1 0 load_pgs opened 0 pgs
Nov 23 02:42:01 localhost ceph-osd[31668]: osd.1 0 log_to_monitors true
Nov 23 02:42:01 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1[31664]: 2025-11-23T07:42:01.641+0000 7f7e50061a80 -1 osd.1 0 log_to_monitors true
Nov 23 02:42:01 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test[32026]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 23 02:42:01 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test[32026]:                            [--no-systemd] [--no-tmpfs]
Nov 23 02:42:01 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test[32026]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 23 02:42:01 localhost systemd[1]: libpod-dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4.scope: Deactivated successfully.
Nov 23 02:42:01 localhost podman[32010]: 2025-11-23 07:42:01.787916663 +0000 UTC m=+0.436694905 container died dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_BRANCH=main, release=553, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 02:42:01 localhost systemd[1]: var-lib-containers-storage-overlay-5dcfad0f1eb1c41567217170ce4a230688a29b933d8eef74670cb70be78eb4c3-merged.mount: Deactivated successfully.
Nov 23 02:42:01 localhost podman[32248]: 2025-11-23 07:42:01.855536167 +0000 UTC m=+0.058557521 container remove dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate-test, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, name=rhceph, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main)
Nov 23 02:42:01 localhost systemd[1]: libpod-conmon-dfdedde2722fb6b65d8aec268450431eb467f2d129dacce7df85e94200408df4.scope: Deactivated successfully.
Nov 23 02:42:02 localhost systemd[1]: Reloading.
Nov 23 02:42:02 localhost systemd-rc-local-generator[32303]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:42:02 localhost systemd-sysv-generator[32310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:42:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:42:02 localhost systemd[1]: Reloading.
Nov 23 02:42:02 localhost systemd-rc-local-generator[32348]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:42:02 localhost systemd-sysv-generator[32352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:42:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:42:02 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 23 02:42:02 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 23 02:42:02 localhost systemd[1]: Starting Ceph osd.4 for 46550e70-79cb-5f55-bf6d-1204b97e083b...
Nov 23 02:42:03 localhost podman[32408]: 
Nov 23 02:42:03 localhost podman[32408]: 2025-11-23 07:42:03.016058162 +0000 UTC m=+0.077714463 container create c350b90090ca8442aa47044d27d722d8accc2ba84aef4c2fbd8d33e3968ca0d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate, vcs-type=git, RELEASE=main, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:42:03 localhost systemd[1]: Started libcrun container.
Nov 23 02:42:03 localhost podman[32408]: 2025-11-23 07:42:02.985606787 +0000 UTC m=+0.047263098 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd540a19f18ec374874a59b37807b459c511c801346ec303da2711c7ea6e7bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd540a19f18ec374874a59b37807b459c511c801346ec303da2711c7ea6e7bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd540a19f18ec374874a59b37807b459c511c801346ec303da2711c7ea6e7bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd540a19f18ec374874a59b37807b459c511c801346ec303da2711c7ea6e7bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd540a19f18ec374874a59b37807b459c511c801346ec303da2711c7ea6e7bc/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:03 localhost podman[32408]: 2025-11-23 07:42:03.165982745 +0000 UTC m=+0.227639016 container init c350b90090ca8442aa47044d27d722d8accc2ba84aef4c2fbd8d33e3968ca0d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:42:03 localhost systemd[1]: tmp-crun.mcZnOG.mount: Deactivated successfully.
Nov 23 02:42:03 localhost podman[32408]: 2025-11-23 07:42:03.177825285 +0000 UTC m=+0.239481556 container start c350b90090ca8442aa47044d27d722d8accc2ba84aef4c2fbd8d33e3968ca0d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, version=7, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main)
Nov 23 02:42:03 localhost podman[32408]: 2025-11-23 07:42:03.177987797 +0000 UTC m=+0.239644078 container attach c350b90090ca8442aa47044d27d722d8accc2ba84aef4c2fbd8d33e3968ca0d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.expose-services=, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.component=rhceph-container)
Nov 23 02:42:03 localhost ceph-osd[31668]: osd.1 0 done with init, starting boot process
Nov 23 02:42:03 localhost ceph-osd[31668]: osd.1 0 start_boot
Nov 23 02:42:03 localhost ceph-osd[31668]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 23 02:42:03 localhost ceph-osd[31668]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 23 02:42:03 localhost ceph-osd[31668]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 23 02:42:03 localhost ceph-osd[31668]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 23 02:42:03 localhost ceph-osd[31668]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 23 02:42:03 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate[32422]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Nov 23 02:42:03 localhost bash[32408]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Nov 23 02:42:03 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate[32422]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 23 02:42:03 localhost bash[32408]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 23 02:42:03 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate[32422]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 23 02:42:03 localhost bash[32408]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 23 02:42:03 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate[32422]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 23 02:42:03 localhost bash[32408]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 23 02:42:03 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate[32422]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:03 localhost bash[32408]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:03 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate[32422]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Nov 23 02:42:03 localhost bash[32408]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Nov 23 02:42:03 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate[32422]: --> ceph-volume raw activate successful for osd ID: 4
Nov 23 02:42:03 localhost bash[32408]: --> ceph-volume raw activate successful for osd ID: 4
Nov 23 02:42:03 localhost systemd[1]: libpod-c350b90090ca8442aa47044d27d722d8accc2ba84aef4c2fbd8d33e3968ca0d3.scope: Deactivated successfully.
Nov 23 02:42:03 localhost podman[32408]: 2025-11-23 07:42:03.921353944 +0000 UTC m=+0.983010265 container died c350b90090ca8442aa47044d27d722d8accc2ba84aef4c2fbd8d33e3968ca0d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 02:42:04 localhost systemd[1]: var-lib-containers-storage-overlay-5cd540a19f18ec374874a59b37807b459c511c801346ec303da2711c7ea6e7bc-merged.mount: Deactivated successfully.
Nov 23 02:42:04 localhost podman[32538]: 2025-11-23 07:42:04.026540712 +0000 UTC m=+0.094068039 container remove c350b90090ca8442aa47044d27d722d8accc2ba84aef4c2fbd8d33e3968ca0d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4-activate, name=rhceph, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:42:04 localhost podman[32597]: 
Nov 23 02:42:04 localhost podman[32597]: 2025-11-23 07:42:04.390561259 +0000 UTC m=+0.084764232 container create cc8cb66f3cbfc53fb331a2c0ad445abc29b42efdf6816a7758c0009a3e40df97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=553, GIT_CLEAN=True)
Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5389fed4254e1dc42c0150997ceddfa022ef8ee2c51c81462b8810795e3f41d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:04 localhost podman[32597]: 2025-11-23 07:42:04.35740695 +0000 UTC m=+0.051609923 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5389fed4254e1dc42c0150997ceddfa022ef8ee2c51c81462b8810795e3f41d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5389fed4254e1dc42c0150997ceddfa022ef8ee2c51c81462b8810795e3f41d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5389fed4254e1dc42c0150997ceddfa022ef8ee2c51c81462b8810795e3f41d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5389fed4254e1dc42c0150997ceddfa022ef8ee2c51c81462b8810795e3f41d/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:04 localhost podman[32597]: 2025-11-23 07:42:04.517623523 +0000 UTC m=+0.211826506 container init cc8cb66f3cbfc53fb331a2c0ad445abc29b42efdf6816a7758c0009a3e40df97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4, release=553, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 02:42:04 localhost podman[32597]: 2025-11-23 07:42:04.529955119 +0000 UTC m=+0.224158182 container start cc8cb66f3cbfc53fb331a2c0ad445abc29b42efdf6816a7758c0009a3e40df97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 02:42:04 localhost bash[32597]: cc8cb66f3cbfc53fb331a2c0ad445abc29b42efdf6816a7758c0009a3e40df97
Nov 23 02:42:04 localhost systemd[1]: Started Ceph osd.4 for 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 02:42:04 localhost ceph-osd[32615]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 02:42:04 localhost ceph-osd[32615]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Nov 23 02:42:04 localhost ceph-osd[32615]: pidfile_write: ignore empty --pid-file
Nov 23 02:42:04 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:04 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 23 02:42:04 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:04 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:04 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:04 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 23 02:42:04 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:04 localhost ceph-osd[32615]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Nov 23 02:42:04 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) close
Nov 23 02:42:04 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) close
Nov 23 02:42:05 localhost ceph-osd[32615]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
Nov 23 02:42:05 localhost ceph-osd[32615]: load: jerasure load: lrc 
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) close
Nov 23 02:42:05 localhost podman[32701]: 
Nov 23 02:42:05 localhost podman[32701]: 2025-11-23 07:42:05.379880102 +0000 UTC m=+0.073081524 container create 4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_zhukovsky, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-type=git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) close
Nov 23 02:42:05 localhost systemd[1]: Started libpod-conmon-4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7.scope.
Nov 23 02:42:05 localhost systemd[1]: tmp-crun.8I6jNG.mount: Deactivated successfully.
Nov 23 02:42:05 localhost systemd[1]: Started libcrun container.
Nov 23 02:42:05 localhost podman[32701]: 2025-11-23 07:42:05.352404505 +0000 UTC m=+0.045605927 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:05 localhost podman[32701]: 2025-11-23 07:42:05.461434992 +0000 UTC m=+0.154636414 container init 4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_zhukovsky, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main)
Nov 23 02:42:05 localhost podman[32701]: 2025-11-23 07:42:05.470925831 +0000 UTC m=+0.164127263 container start 4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_zhukovsky, GIT_BRANCH=main, io.openshift.expose-services=, version=7, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, release=553)
Nov 23 02:42:05 localhost podman[32701]: 2025-11-23 07:42:05.471166414 +0000 UTC m=+0.164367846 container attach 4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_zhukovsky, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12)
Nov 23 02:42:05 localhost distracted_zhukovsky[32720]: 167 167
Nov 23 02:42:05 localhost systemd[1]: libpod-4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7.scope: Deactivated successfully.
Nov 23 02:42:05 localhost podman[32701]: 2025-11-23 07:42:05.478044372 +0000 UTC m=+0.171245804 container died 4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_zhukovsky, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph)
Nov 23 02:42:05 localhost podman[32725]: 2025-11-23 07:42:05.566883324 +0000 UTC m=+0.083143411 container remove 4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_zhukovsky, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64, name=rhceph, ceph=True)
Nov 23 02:42:05 localhost systemd[1]: libpod-conmon-4b78bddb75aab9e15bd42dc6d137ab2c5e819a3eb7fe72787464d1cf033fa5a7.scope: Deactivated successfully.
Nov 23 02:42:05 localhost ceph-osd[32615]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 23 02:42:05 localhost ceph-osd[32615]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e472e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs mount
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs mount shared_bdev_used = 0
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: RocksDB version: 7.9.2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Git sha 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: DB SUMMARY
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: DB Session ID:  VUZOTMPYLFHGN1LMCV94
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: CURRENT file:  CURRENT
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                         Options.error_if_exists: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.create_if_missing: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                                     Options.env: 0x55720e706cb0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                                Options.info_log: 0x55720f3f4460
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                              Options.statistics: (nil)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.use_fsync: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                              Options.db_log_dir: 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.write_buffer_manager: 0x55720e45c140
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.unordered_write: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.row_cache: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                              Options.wal_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.two_write_queues: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.wal_compression: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.atomic_flush: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.max_background_jobs: 4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.max_background_compactions: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.max_subcompactions: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.max_open_files: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Compression algorithms supported:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: 	kZSTD supported: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: 	kXpressCompression supported: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: 	kBZip2Compression supported: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: 	kLZ4Compression supported: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: 	kZlibCompression supported: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: 	kSnappyCompression supported: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4620)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4620)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4620)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4620)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4620)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4620)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4620)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a850#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f3f4840)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c674cddd-e042-4ee6-a0fa-0fbe12392b9d
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883725708152, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883725708570, "job": 1, "event": "recovery_finished"}
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000
Nov 23 02:42:05 localhost ceph-osd[32615]: freelist init
Nov 23 02:42:05 localhost ceph-osd[32615]: freelist _read_cfg
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs umount
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) close
Nov 23 02:42:05 localhost podman[32941]: 2025-11-23 07:42:05.783686121 +0000 UTC m=+0.073060413 container create f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_nash, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.903 iops: 7655.241 elapsed_sec: 0.392
Nov 23 02:42:05 localhost ceph-osd[31668]: log_channel(cluster) log [WRN] : OSD bench result of 7655.240859 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 0 waiting for initial osdmap
Nov 23 02:42:05 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1[31664]: 2025-11-23T07:42:05.800+0000 7f7e4c7f5640 -1 osd.1 0 waiting for initial osdmap
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 12 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 12 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 12 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Nov 23 02:42:05 localhost systemd[1]: Started libpod-conmon-f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff.scope.
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 12 set_numa_affinity not setting numa affinity
Nov 23 02:42:05 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-1[31664]: 2025-11-23T07:42:05.833+0000 7f7e4760a640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 02:42:05 localhost ceph-osd[31668]: osd.1 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 23 02:42:05 localhost systemd[1]: Started libcrun container.
Nov 23 02:42:05 localhost podman[32941]: 2025-11-23 07:42:05.75114621 +0000 UTC m=+0.040520502 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1201ac7a90d2d7e609a0426b8bdd74e9fe8ca9778cdd884b830933b8257c5ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1201ac7a90d2d7e609a0426b8bdd74e9fe8ca9778cdd884b830933b8257c5ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1201ac7a90d2d7e609a0426b8bdd74e9fe8ca9778cdd884b830933b8257c5ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:05 localhost podman[32941]: 2025-11-23 07:42:05.89759101 +0000 UTC m=+0.186965282 container init f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_nash, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55)
Nov 23 02:42:05 localhost podman[32941]: 2025-11-23 07:42:05.909273707 +0000 UTC m=+0.198647989 container start f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_nash, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, com.redhat.component=rhceph-container, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 02:42:05 localhost podman[32941]: 2025-11-23 07:42:05.90951853 +0000 UTC m=+0.198892882 container attach f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_nash, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, build-date=2025-09-24T08:57:55, release=553, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 23 02:42:05 localhost ceph-osd[32615]: bdev(0x55720e473180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs mount
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 23 02:42:05 localhost ceph-osd[32615]: bluefs mount shared_bdev_used = 4718592
Nov 23 02:42:05 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: RocksDB version: 7.9.2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Git sha 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: DB SUMMARY
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: DB Session ID:  VUZOTMPYLFHGN1LMCV95
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: CURRENT file:  CURRENT
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                         Options.error_if_exists: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.create_if_missing: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                                     Options.env: 0x55720e4aea10
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                                Options.info_log: 0x55720f4b6000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                              Options.statistics: (nil)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.use_fsync: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                              Options.db_log_dir: 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                                 Options.wal_dir: db.wal
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.write_buffer_manager: 0x55720e45d5e0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.unordered_write: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.row_cache: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                              Options.wal_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.two_write_queues: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.wal_compression: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.atomic_flush: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.max_background_jobs: 4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.max_background_compactions: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.max_subcompactions: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.max_open_files: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Compression algorithms supported:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: #011kZSTD supported: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: #011kXpressCompression supported: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: #011kBZip2Compression supported: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: #011kLZ4Compression supported: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: #011kZlibCompression supported: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: #011kSnappyCompression supported: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b60a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: 
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b60a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b60a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b60a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44a2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b60a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55720e44a2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
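The table_factory dump for column family p-0 above reports a BinnedLRUCache with capacity 483183820 bytes and num_shard_bits 4. A minimal sketch of what those two numbers imply (pure arithmetic, no RocksDB dependency):

```python
# Decode the block_cache parameters reported in the table_factory dump above.
# Input values are taken verbatim from the log lines.
capacity_bytes = 483_183_820
num_shard_bits = 4

num_shards = 2 ** num_shard_bits            # the LRU cache is split into 2^bits shards
per_shard = capacity_bytes // num_shards    # approximate capacity per shard
capacity_mib = capacity_bytes / 2**20       # total cache size in MiB

print(f"{num_shards} shards, ~{per_shard} bytes/shard, {capacity_mib:.1f} MiB total")
```

The total works out to roughly 460.8 MiB across 16 LRU shards. Note that 483183820 equals 45% of 1 GiB, which suggests (an inference, not stated in the log) a cache carved as a fixed fraction out of a larger per-OSD memory target.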
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b60a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55720e44a2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
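The level sizing in the dumps above follows directly from max_bytes_for_level_base = 1073741824 (1 GiB), max_bytes_for_level_multiplier = 8, all addtl factors equal to 1, and level_compaction_dynamic_level_bytes disabled. A short sketch computing the resulting per-level byte targets:

```python
# Derive per-level target sizes from the options dumped above:
# base 1 GiB, multiplier 8, all multiplier_addtl factors 1,
# dynamic level bytes disabled, num_levels 7.
base = 1_073_741_824
multiplier = 8.0
num_levels = 7

targets = {}
size = base
for level in range(1, num_levels):   # L0 is governed by file count, not a byte target
    targets[level] = int(size)
    size *= multiplier

for level, nbytes in targets.items():
    print(f"L{level}: {nbytes / 2**30:.0f} GiB")
```

With these settings the tree grows 1, 8, 64, 512, 4096, 32768 GiB for L1 through L6; in practice only the first few levels will ever be populated at typical OSD metadata volumes.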
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b60a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55720e44a2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b7520)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44b610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b7520)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44b610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:05 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:           Options.merge_operator: None
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter: None
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55720f4b7520)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55720e44b610#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:        Options.write_buffer_size: 16777216
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:  Options.max_write_buffer_number: 64
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:          Options.compression: LZ4
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:             Options.num_levels: 7
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                           Options.bloom_locality: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                               Options.ttl: 2592000
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                       Options.enable_blob_files: false
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                           Options.min_blob_size: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c674cddd-e042-4ee6-a0fa-0fbe12392b9d
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883725976915, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883725981603, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883725, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c674cddd-e042-4ee6-a0fa-0fbe12392b9d", "db_session_id": "VUZOTMPYLFHGN1LMCV95", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883725989594, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883725, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c674cddd-e042-4ee6-a0fa-0fbe12392b9d", "db_session_id": "VUZOTMPYLFHGN1LMCV95", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883725994614, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763883725, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c674cddd-e042-4ee6-a0fa-0fbe12392b9d", "db_session_id": "VUZOTMPYLFHGN1LMCV95", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763883726000584, "job": 1, "event": "recovery_finished"}
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55720e4fa700
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: DB pointer 0x55720f34da00
Nov 23 02:42:06 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 23 02:42:06 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4
Nov 23 02:42:06 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 02:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55720e44a2d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55720e44a2d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Nov 23 02:42:06 localhost ceph-osd[32615]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 23 02:42:06 localhost ceph-osd[32615]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 23 02:42:06 localhost ceph-osd[32615]: _get_class not permitted to load lua
Nov 23 02:42:06 localhost ceph-osd[32615]: _get_class not permitted to load sdk
Nov 23 02:42:06 localhost ceph-osd[32615]: _get_class not permitted to load test_remote_reads
Nov 23 02:42:06 localhost ceph-osd[32615]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 23 02:42:06 localhost ceph-osd[32615]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 23 02:42:06 localhost ceph-osd[32615]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 23 02:42:06 localhost ceph-osd[32615]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 23 02:42:06 localhost ceph-osd[32615]: osd.4 0 load_pgs
Nov 23 02:42:06 localhost ceph-osd[32615]: osd.4 0 load_pgs opened 0 pgs
Nov 23 02:42:06 localhost ceph-osd[32615]: osd.4 0 log_to_monitors true
Nov 23 02:42:06 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4[32611]: 2025-11-23T07:42:06.034+0000 7f9d7134ea80 -1 osd.4 0 log_to_monitors true
Nov 23 02:42:06 localhost systemd[1]: var-lib-containers-storage-overlay-40e403959648f50b3b3a80aad8b476ae1001a28017e0642cf76025a8275e6ce6-merged.mount: Deactivated successfully.
Nov 23 02:42:06 localhost charming_nash[32958]: {
Nov 23 02:42:06 localhost charming_nash[32958]:    "0f3557b9-ced6-450d-839e-568c561d51bc": {
Nov 23 02:42:06 localhost charming_nash[32958]:        "ceph_fsid": "46550e70-79cb-5f55-bf6d-1204b97e083b",
Nov 23 02:42:06 localhost charming_nash[32958]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 23 02:42:06 localhost charming_nash[32958]:        "osd_id": 4,
Nov 23 02:42:06 localhost charming_nash[32958]:        "osd_uuid": "0f3557b9-ced6-450d-839e-568c561d51bc",
Nov 23 02:42:06 localhost charming_nash[32958]:        "type": "bluestore"
Nov 23 02:42:06 localhost charming_nash[32958]:    },
Nov 23 02:42:06 localhost charming_nash[32958]:    "d3b1e1ca-55bf-4929-99f7-e0e466def09e": {
Nov 23 02:42:06 localhost charming_nash[32958]:        "ceph_fsid": "46550e70-79cb-5f55-bf6d-1204b97e083b",
Nov 23 02:42:06 localhost charming_nash[32958]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 23 02:42:06 localhost charming_nash[32958]:        "osd_id": 1,
Nov 23 02:42:06 localhost charming_nash[32958]:        "osd_uuid": "d3b1e1ca-55bf-4929-99f7-e0e466def09e",
Nov 23 02:42:06 localhost charming_nash[32958]:        "type": "bluestore"
Nov 23 02:42:06 localhost charming_nash[32958]:    }
Nov 23 02:42:06 localhost charming_nash[32958]: }
Nov 23 02:42:06 localhost systemd[1]: libpod-f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff.scope: Deactivated successfully.
Nov 23 02:42:06 localhost podman[32941]: 2025-11-23 07:42:06.562373605 +0000 UTC m=+0.851747937 container died f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_nash, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, release=553, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.buildah.version=1.33.12, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container)
Nov 23 02:42:06 localhost systemd[1]: var-lib-containers-storage-overlay-f1201ac7a90d2d7e609a0426b8bdd74e9fe8ca9778cdd884b830933b8257c5ad-merged.mount: Deactivated successfully.
Nov 23 02:42:06 localhost podman[33211]: 2025-11-23 07:42:06.660335182 +0000 UTC m=+0.085440781 container remove f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_nash, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 02:42:06 localhost systemd[1]: libpod-conmon-f7e52adc6703a33c74c9dff5c01fd4416d75169bdfc158e62a07f038b27470ff.scope: Deactivated successfully.
Nov 23 02:42:06 localhost ceph-osd[31668]: osd.1 13 state: booting -> active
Nov 23 02:42:06 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 23 02:42:06 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 23 02:42:07 localhost ceph-osd[32615]: osd.4 0 done with init, starting boot process
Nov 23 02:42:07 localhost ceph-osd[32615]: osd.4 0 start_boot
Nov 23 02:42:07 localhost ceph-osd[32615]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 23 02:42:07 localhost ceph-osd[32615]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 23 02:42:07 localhost ceph-osd[32615]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 23 02:42:07 localhost ceph-osd[32615]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 23 02:42:07 localhost ceph-osd[32615]: osd.4 0  bench count 12288000 bsize 4 KiB
Nov 23 02:42:07 localhost ceph-osd[31668]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 23 02:42:07 localhost ceph-osd[31668]: osd.1 14 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 23 02:42:07 localhost ceph-osd[31668]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 23 02:42:08 localhost systemd[1]: tmp-crun.q1cckc.mount: Deactivated successfully.
Nov 23 02:42:08 localhost podman[33339]: 2025-11-23 07:42:08.397147304 +0000 UTC m=+0.111042653 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55)
Nov 23 02:42:08 localhost podman[33339]: 2025-11-23 07:42:08.544012398 +0000 UTC m=+0.257907787 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, version=7, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12)
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.860 iops: 7900.034 elapsed_sec: 0.380
Nov 23 02:42:10 localhost ceph-osd[32615]: log_channel(cluster) log [WRN] : OSD bench result of 7900.034335 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 0 waiting for initial osdmap
Nov 23 02:42:10 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4[32611]: 2025-11-23T07:42:10.031+0000 7f9d6dae2640 -1 osd.4 0 waiting for initial osdmap
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 16 check_osdmap_features require_osd_release unknown -> reef
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 16 set_numa_affinity not setting numa affinity
Nov 23 02:42:10 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-osd-4[32611]: 2025-11-23T07:42:10.052+0000 7f9d688f7640 -1 osd.4 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 16 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 23 02:42:10 localhost podman[33533]: 
Nov 23 02:42:10 localhost podman[33533]: 2025-11-23 07:42:10.558047042 +0000 UTC m=+0.076491797 container create 73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_wu, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=553, RELEASE=main, vendor=Red Hat, Inc.)
Nov 23 02:42:10 localhost systemd[1]: Started libpod-conmon-73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4.scope.
Nov 23 02:42:10 localhost podman[33533]: 2025-11-23 07:42:10.529148537 +0000 UTC m=+0.047593322 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:10 localhost systemd[1]: Started libcrun container.
Nov 23 02:42:10 localhost podman[33533]: 2025-11-23 07:42:10.646095184 +0000 UTC m=+0.164539929 container init 73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_wu, name=rhceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Nov 23 02:42:10 localhost systemd[1]: tmp-crun.aAcvsH.mount: Deactivated successfully.
Nov 23 02:42:10 localhost podman[33533]: 2025-11-23 07:42:10.662812345 +0000 UTC m=+0.181257100 container start 73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_wu, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 02:42:10 localhost podman[33533]: 2025-11-23 07:42:10.663146549 +0000 UTC m=+0.181591294 container attach 73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_wu, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, com.redhat.component=rhceph-container, ceph=True)
Nov 23 02:42:10 localhost friendly_wu[33549]: 167 167
Nov 23 02:42:10 localhost systemd[1]: libpod-73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4.scope: Deactivated successfully.
Nov 23 02:42:10 localhost podman[33533]: 2025-11-23 07:42:10.66723457 +0000 UTC m=+0.185679335 container died 73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_wu, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55)
Nov 23 02:42:10 localhost podman[33554]: 2025-11-23 07:42:10.773597474 +0000 UTC m=+0.091009690 container remove 73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_wu, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, ceph=True)
Nov 23 02:42:10 localhost systemd[1]: libpod-conmon-73f167a92f4ee25769cb90dd49aac406056d86a0bb42be7e5eda8aa1d58005f4.scope: Deactivated successfully.
Nov 23 02:42:10 localhost ceph-osd[32615]: osd.4 17 state: booting -> active
Nov 23 02:42:10 localhost podman[33578]: 
Nov 23 02:42:10 localhost podman[33578]: 2025-11-23 07:42:10.995608968 +0000 UTC m=+0.081050555 container create e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_matsumoto, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, version=7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git)
Nov 23 02:42:11 localhost systemd[1]: Started libpod-conmon-e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657.scope.
Nov 23 02:42:11 localhost systemd[1]: Started libcrun container.
Nov 23 02:42:11 localhost podman[33578]: 2025-11-23 07:42:10.965898292 +0000 UTC m=+0.051339879 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 02:42:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc98c3dc30c8aad91fe0d608532d1bf7f4339bc4fe4094acf27e5aec5455840/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc98c3dc30c8aad91fe0d608532d1bf7f4339bc4fe4094acf27e5aec5455840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc98c3dc30c8aad91fe0d608532d1bf7f4339bc4fe4094acf27e5aec5455840/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 02:42:11 localhost podman[33578]: 2025-11-23 07:42:11.091983034 +0000 UTC m=+0.177424632 container init e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_matsumoto, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, name=rhceph, version=7)
Nov 23 02:42:11 localhost podman[33578]: 2025-11-23 07:42:11.100794676 +0000 UTC m=+0.186236273 container start e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_matsumoto, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_CLEAN=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container)
Nov 23 02:42:11 localhost podman[33578]: 2025-11-23 07:42:11.1011133 +0000 UTC m=+0.186554887 container attach e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_matsumoto, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, vcs-type=git, io.openshift.expose-services=, version=7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 02:42:11 localhost systemd[1]: var-lib-containers-storage-overlay-523e0e74bbd8480cbe963c364e15e6a7021c09c5ee1ed09fba607d8d75aa94a4-merged.mount: Deactivated successfully.
Nov 23 02:42:11 localhost ceph-osd[32615]: osd.4 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=17) [2,4,3] r=1 lpr=17 pi=[14,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 02:42:12 localhost fervent_matsumoto[33593]: [
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:    {
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        "available": false,
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        "ceph_device": false,
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        "lsm_data": {},
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        "lvs": [],
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        "path": "/dev/sr0",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        "rejected_reasons": [
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "Has a FileSystem",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "Insufficient space (<5GB)"
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        ],
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        "sys_api": {
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "actuators": null,
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "device_nodes": "sr0",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "human_readable_size": "482.00 KB",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "id_bus": "ata",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "model": "QEMU DVD-ROM",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "nr_requests": "2",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "partitions": {},
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "path": "/dev/sr0",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "removable": "1",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "rev": "2.5+",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "ro": "0",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "rotational": "1",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "sas_address": "",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "sas_device_handle": "",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "scheduler_mode": "mq-deadline",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "sectors": 0,
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "sectorsize": "2048",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "size": 493568.0,
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "support_discard": "0",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "type": "disk",
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:            "vendor": "QEMU"
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:        }
Nov 23 02:42:12 localhost fervent_matsumoto[33593]:    }
Nov 23 02:42:12 localhost fervent_matsumoto[33593]: ]
Nov 23 02:42:12 localhost systemd[1]: libpod-e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657.scope: Deactivated successfully.
Nov 23 02:42:12 localhost podman[33578]: 2025-11-23 07:42:12.046057333 +0000 UTC m=+1.131498900 container died e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_matsumoto, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-type=git, build-date=2025-09-24T08:57:55, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 02:42:12 localhost systemd[1]: tmp-crun.qdpR5H.mount: Deactivated successfully.
Nov 23 02:42:12 localhost systemd[1]: var-lib-containers-storage-overlay-1cc98c3dc30c8aad91fe0d608532d1bf7f4339bc4fe4094acf27e5aec5455840-merged.mount: Deactivated successfully.
Nov 23 02:42:12 localhost podman[34890]: 2025-11-23 07:42:12.193252879 +0000 UTC m=+0.139853665 container remove e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_matsumoto, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc.)
Nov 23 02:42:12 localhost systemd[1]: libpod-conmon-e61150984c41560e6c7f8325521f8601e870ff9f270262c1de3c7dd0f2f84657.scope: Deactivated successfully.
Nov 23 02:42:21 localhost systemd[26087]: Starting Mark boot as successful...
Nov 23 02:42:21 localhost systemd[1]: tmp-crun.CfymA3.mount: Deactivated successfully.
Nov 23 02:42:21 localhost systemd[26087]: Finished Mark boot as successful.
Nov 23 02:42:21 localhost podman[35021]: 2025-11-23 07:42:21.032843097 +0000 UTC m=+0.106718590 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 02:42:21 localhost podman[35021]: 2025-11-23 07:42:21.135142708 +0000 UTC m=+0.209018191 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, ceph=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc.)
Nov 23 02:42:46 localhost sshd[35101]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:43:22 localhost podman[35202]: 2025-11-23 07:43:22.929252914 +0000 UTC m=+0.086978997 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_BRANCH=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, version=7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 02:43:23 localhost podman[35202]: 2025-11-23 07:43:23.026888833 +0000 UTC m=+0.184614946 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, architecture=x86_64, vcs-type=git, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public)
Nov 23 02:43:31 localhost systemd[1]: session-13.scope: Deactivated successfully.
Nov 23 02:43:31 localhost systemd[1]: session-13.scope: Consumed 21.051s CPU time.
Nov 23 02:43:31 localhost systemd-logind[761]: Session 13 logged out. Waiting for processes to exit.
Nov 23 02:43:31 localhost systemd-logind[761]: Removed session 13.
Nov 23 02:44:06 localhost sshd[35346]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:45:15 localhost sshd[35424]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:45:34 localhost systemd[26087]: Created slice User Background Tasks Slice.
Nov 23 02:45:34 localhost systemd[26087]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 02:45:34 localhost systemd[26087]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 02:46:12 localhost sshd[35504]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:46:56 localhost sshd[35582]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:46:56 localhost systemd-logind[761]: New session 27 of user zuul.
Nov 23 02:46:56 localhost systemd[1]: Started Session 27 of User zuul.
Nov 23 02:46:56 localhost python3[35630]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 23 02:46:57 localhost python3[35675]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 02:46:58 localhost python3[35695]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532586.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 23 02:46:58 localhost python3[35751]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:46:59 localhost python3[35794]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763884018.373347-66725-74235905859861/source _original_basename=tmp5_r7uwq5 follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:46:59 localhost python3[35824]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:46:59 localhost python3[35840]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:47:00 localhost python3[35856]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:47:00 localhost python3[35872]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUOGMsJ/AVVgY0xZSxHG1Oo8GuV12NJwOZUmjQnvkVRDmS001A30AXXR0N8dst9ZQnVZ7t0kBrbCnuEI8SNNpeziiyScPCPK73L2zyV8Js+qKXswkPpolOCRy92kOph1OZYuXdhUodUSdBoJ4mwf2s5nJhAJmH6XvfiqHUaCRd9Gp9NgU9FvjG41eO7BwkjRpKTg2jZAy21PGLDeWxRI5qxEpDgdXeW0riuuVHj1FGKKfC1wAe7xB5wykXcRkuog4VlSx2/V+mPpSMDZ1POsAxKOAMYkxfj+qoDIBfDc0R1cbxFehgmCHc8a4z+IjP5eiUvX3HjeV7ZBTR5hkYKHAJfeU6Cj5HQsTwwJrc+oHuosokgJ/ct0+WpvqhalUoL8dpoLUY6PQq+5CeOJrpZeLzXZTIPLWTA4jbbkHa/SwmAk07+hpxpFz3NhSfpT4GfOgKnowPfo+3mJMDAetTMZpizTdfPfc13gl7Zyqb9cB8lgx1IVzN6ZrxPyvPqj05uPk= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:47:01 localhost python3[35886]: ansible-ping Invoked with data=pong
Nov 23 02:47:12 localhost sshd[35888]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:47:12 localhost systemd[1]: Created slice User Slice of UID 1003.
Nov 23 02:47:12 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Nov 23 02:47:12 localhost systemd-logind[761]: New session 28 of user tripleo-admin.
Nov 23 02:47:12 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Nov 23 02:47:12 localhost systemd[1]: Starting User Manager for UID 1003...
Nov 23 02:47:12 localhost systemd[35892]: Queued start job for default target Main User Target.
Nov 23 02:47:12 localhost systemd[35892]: Created slice User Application Slice.
Nov 23 02:47:12 localhost systemd[35892]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 02:47:12 localhost systemd[35892]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 02:47:12 localhost systemd[35892]: Reached target Paths.
Nov 23 02:47:12 localhost systemd[35892]: Reached target Timers.
Nov 23 02:47:12 localhost systemd[35892]: Starting D-Bus User Message Bus Socket...
Nov 23 02:47:12 localhost systemd[35892]: Starting Create User's Volatile Files and Directories...
Nov 23 02:47:12 localhost systemd[35892]: Finished Create User's Volatile Files and Directories.
Nov 23 02:47:12 localhost systemd[35892]: Listening on D-Bus User Message Bus Socket.
Nov 23 02:47:12 localhost systemd[35892]: Reached target Sockets.
Nov 23 02:47:12 localhost systemd[35892]: Reached target Basic System.
Nov 23 02:47:12 localhost systemd[35892]: Reached target Main User Target.
Nov 23 02:47:12 localhost systemd[35892]: Startup finished in 124ms.
Nov 23 02:47:12 localhost systemd[1]: Started User Manager for UID 1003.
Nov 23 02:47:12 localhost systemd[1]: Started Session 28 of User tripleo-admin.
Nov 23 02:47:13 localhost python3[35953]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 02:47:18 localhost python3[35973]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Nov 23 02:47:19 localhost python3[35989]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Nov 23 02:47:19 localhost python3[36037]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.teob0ebztmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:47:20 localhost python3[36067]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.teob0ebztmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:47:21 localhost python3[36083]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.teob0ebztmphosts insertbefore=BOF block=172.17.0.106 np0005532584.localdomain np0005532584#012172.18.0.106 np0005532584.storage.localdomain np0005532584.storage#012172.20.0.106 np0005532584.storagemgmt.localdomain np0005532584.storagemgmt#012172.17.0.106 np0005532584.internalapi.localdomain np0005532584.internalapi#012172.19.0.106 np0005532584.tenant.localdomain np0005532584.tenant#012192.168.122.106 np0005532584.ctlplane.localdomain np0005532584.ctlplane#012172.17.0.107 np0005532585.localdomain np0005532585#012172.18.0.107 np0005532585.storage.localdomain np0005532585.storage#012172.20.0.107 np0005532585.storagemgmt.localdomain np0005532585.storagemgmt#012172.17.0.107 np0005532585.internalapi.localdomain np0005532585.internalapi#012172.19.0.107 np0005532585.tenant.localdomain np0005532585.tenant#012192.168.122.107 np0005532585.ctlplane.localdomain np0005532585.ctlplane#012172.17.0.108 np0005532586.localdomain np0005532586#012172.18.0.108 np0005532586.storage.localdomain np0005532586.storage#012172.20.0.108 np0005532586.storagemgmt.localdomain np0005532586.storagemgmt#012172.17.0.108 np0005532586.internalapi.localdomain np0005532586.internalapi#012172.19.0.108 np0005532586.tenant.localdomain np0005532586.tenant#012192.168.122.108 np0005532586.ctlplane.localdomain np0005532586.ctlplane#012172.17.0.103 np0005532581.localdomain np0005532581#012172.18.0.103 np0005532581.storage.localdomain np0005532581.storage#012172.20.0.103 np0005532581.storagemgmt.localdomain np0005532581.storagemgmt#012172.17.0.103 np0005532581.internalapi.localdomain np0005532581.internalapi#012172.19.0.103 np0005532581.tenant.localdomain np0005532581.tenant#012192.168.122.103 np0005532581.ctlplane.localdomain np0005532581.ctlplane#012172.17.0.104 np0005532582.localdomain np0005532582#012172.18.0.104 np0005532582.storage.localdomain np0005532582.storage#012172.20.0.104 np0005532582.storagemgmt.localdomain np0005532582.storagemgmt#012172.17.0.104 np0005532582.internalapi.localdomain np0005532582.internalapi#012172.19.0.104 np0005532582.tenant.localdomain np0005532582.tenant#012192.168.122.104 np0005532582.ctlplane.localdomain np0005532582.ctlplane#012172.17.0.105 np0005532583.localdomain np0005532583#012172.18.0.105 np0005532583.storage.localdomain np0005532583.storage#012172.20.0.105 np0005532583.storagemgmt.localdomain np0005532583.storagemgmt#012172.17.0.105 np0005532583.internalapi.localdomain np0005532583.internalapi#012172.19.0.105 np0005532583.tenant.localdomain np0005532583.tenant#012192.168.122.105 np0005532583.ctlplane.localdomain np0005532583.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99  overcloud.ctlplane.localdomain#012172.18.0.204  overcloud.storage.localdomain#012172.20.0.141  overcloud.storagemgmt.localdomain#012172.17.0.224  overcloud.internalapi.localdomain#012172.21.0.154  overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:47:22 localhost python3[36099]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.teob0ebztmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:47:22 localhost python3[36116]: ansible-file Invoked with path=/tmp/ansible.teob0ebztmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:47:23 localhost python3[36132]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:47:24 localhost python3[36149]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:47:28 localhost python3[36168]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:47:29 localhost python3[36214]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:48:13 localhost sshd[36829]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:48:40 localhost kernel: SELinux:  Converting 2700 SID table entries...
Nov 23 02:48:40 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 02:48:40 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 02:48:40 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 02:48:40 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 02:48:40 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 02:48:40 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 02:48:40 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 02:48:40 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 23 02:48:40 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 02:48:40 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 02:48:40 localhost systemd[1]: Reloading.
Nov 23 02:48:40 localhost systemd-rc-local-generator[37122]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:48:40 localhost systemd-sysv-generator[37125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:48:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:48:40 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 02:48:41 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 02:48:41 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 02:48:41 localhost systemd[1]: run-r1037ca08a7414eae944386e8a3db2470.service: Deactivated successfully.
Nov 23 02:48:42 localhost python3[37552]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:48:44 localhost python3[37691]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:48:44 localhost systemd[1]: Reloading.
Nov 23 02:48:44 localhost systemd-sysv-generator[37723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:48:44 localhost systemd-rc-local-generator[37718]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:48:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:48:45 localhost python3[37745]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:48:46 localhost python3[37761]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:48:47 localhost python3[37778]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 02:48:47 localhost python3[37796]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:48:48 localhost python3[37814]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:48:48 localhost python3[37832]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 02:48:48 localhost systemd[1]: Reloading Network Manager...
Nov 23 02:48:48 localhost NetworkManager[5990]: <info>  [1763884128.7638] audit: op="reload" arg="0" pid=37835 uid=0 result="success"
Nov 23 02:48:48 localhost NetworkManager[5990]: <info>  [1763884128.7648] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Nov 23 02:48:48 localhost NetworkManager[5990]: <info>  [1763884128.7648] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 23 02:48:48 localhost systemd[1]: Reloaded Network Manager.
Nov 23 02:48:50 localhost python3[37851]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:48:50 localhost python3[37868]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:48:50 localhost python3[37886]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:48:51 localhost python3[37902]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:48:52 localhost python3[37918]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 23 02:48:52 localhost python3[37934]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:48:53 localhost python3[37950]: ansible-blockinfile Invoked with path=/tmp/ansible.npsgagus block=[192.168.122.106]*,[np0005532584.ctlplane.localdomain]*,[172.17.0.106]*,[np0005532584.internalapi.localdomain]*,[172.18.0.106]*,[np0005532584.storage.localdomain]*,[172.20.0.106]*,[np0005532584.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005532584.tenant.localdomain]*,[np0005532584.localdomain]*,[np0005532584]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3OrbPXlomvlluk5pGQwXwJu+cR1IMLHg5EnGcI5epB1SB6q/EzlEo5+bOYmmvILsoesUzBIBq21mRhn1Wi2yjlys0pArFDqiLkUBvTW9ro6MKci9Smc12m7AkLus6UO6h3pzqcOdRZQ3KOQDL/83yYJVBCJyqlISXWzzHJpGRVnZHeT4CgKZ1nG5UEvOrtPXRAVWkz3v5TghJrYXvWaPQPmWcEy1rfhCjkCfQY++JB/Dlgammmd1+ZldadeXQi1b2X02a6GFyW0pUMFLjAP7Wr+KcRa5FIPmGwsPuc1NhveAH6zyLrabrh7jPR5O0tBjz9KcNYXbQmJetGt9ZWzFsl0qzXrvI38q5RlGptbqg0iSez61VBAUtnfs33hnYc3dvzJKXReR76PoU3yu/tLrhdK6szqIVsMdw2LGEro7l3KKMKXHSpi8n77fH8ICiU3F5Oif+nvS/e7xr4LccSEnFEHA9PdNxOWxJYLcxTQCt3BkNFrWw4oB1LiDsn98HlS8=#012[192.168.122.107]*,[np0005532585.ctlplane.localdomain]*,[172.17.0.107]*,[np0005532585.internalapi.localdomain]*,[172.18.0.107]*,[np0005532585.storage.localdomain]*,[172.20.0.107]*,[np0005532585.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005532585.tenant.localdomain]*,[np0005532585.localdomain]*,[np0005532585]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCU6ocW8HWtJJyWPSFUqcN5z70XYnNrE5KeWh/VJ4bDkpVePpxxcdD8r8cKL121q0MKPRgia3jLqnKz+o4MH3AqTAWCZamBc1+ePq9OvZDenK69byea8TM176uYzfePjNlud4LSZ6lfkgneO5jeNE6/RcHgBc8Me+2mlzpavioA814r6Ci6hFaEIOS1Zd2b/yKzI4QRl6xg/aJKvlIe9w3G3BvKOG5pixPx2ng4wYc0OMtJb9ItJgZLY92GGuvVRwn9e0D4lab84+x/Nn3XatQdqU69ev7da/bQCUeBivyEZo03olh56YxCKvNfG3ZYwwhMTn9Hg/EdnwrGHYHj0ZgfSR1+Dzvnk0WW/MRs0276Ojj5O0hhnlaAh5n97W6fgHldGKvdEafYeD602C1Zkd+ISqF13W56MWhtUhiUsdUHShnpM/EBOITg6mTDFP1i/qMS0PjRaCzBpdqpJIoKzQpsi4Z3QTHTZ7uK/lqOEaE/wqXHuYlMKcTuOuX33gIp28k=#012[192.168.122.108]*,[np0005532586.ctlplane.localdomain]*,[172.17.0.108]*,[np0005532586.internalapi.localdomain]*,[172.18.0.108]*,[np0005532586.storage.localdomain]*,[172.20.0.108]*,[np0005532586.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005532586.tenant.localdomain]*,[np0005532586.localdomain]*,[np0005532586]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6U4JggC29IKqxQ7GjhK23AehQb1S2zLryOxLwLEs9rP0qOZpJ9wR1VsBNLXDCmoRVTsH2+3V00hmkvlanKUuzgmLO61hdur+5NQD0xHnY7lOLpOoyR7hJiMuHj/nRgBLWY2OB8Gim121dgfuc2zRF92igDYe65Uf0et83vWlgRmc7KlziaJ91iVcBUmhGYf3Ij7QxfhQH5TTnGoQizdiBpuP+yVuU2AepbvQ8ZFvzioCwzWAVu/xfdRFp9QyLT4JP1jM6dadTjD5RUAjRL6qR1tLXVq/rvqtXSL8ruBSYm3NCOys9RtdrNolZ7frd+zmvF+VzMNLtlRxiuy1ReR+ZO3felB+4TwfEfLZ+DqE1s3+ksCQH/sVCrxzFsRz5lamWG3p78ZBWTiQ/7WdJS1dQOHz+pKNSSW/NYMIqitxsCsEWPJLq/EWoHVxvjREucCb5YvWHPKOv5RLlbm5lSHFLuFVV8O3AAzD/3JsjTbKGOjJhmtxPCgEy7RPqtIUX90s=#012[192.168.122.103]*,[np0005532581.ctlplane.localdomain]*,[172.17.0.103]*,[np0005532581.internalapi.localdomain]*,[172.18.0.103]*,[np0005532581.storage.localdomain]*,[172.20.0.103]*,[np0005532581.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005532581.tenant.localdomain]*,[np0005532581.localdomain]*,[np0005532581]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRibSMIP5+E9lJWuaKDEuCaJoGhGPTqff+o8SP2Twk+NhPOa5FC7WQhHPLXVhKAtlCX60ckYE53Q/H/RVRZ55JdWQLSdY/1tQCD6c0Ry6N+UD+mxo9iN9cHk6vd6J5kJu+v/gBEmFY1A9pjzsD1CTR8gZJHZFqbUTzXrKkoUjK3Kqa8UtvzyhgYQtYIaUwaf1z7CMNQ3A4EaGVKyRsVwb11jlaT9fjB43E3tp9p5EG6PPJEGux/Xea6iHnhSwZHpkD/ylneDOkBbGvYKhL33bpXMcbuHy32jAFr+2Q07sKvgy/b5/f/nTgNCyxEIpoXUbEhX+Vlh+gycU7KJw6FRyR3dQFjooV97NQ/oov2VP9DnTObziZA8lhaJ20ChTfDVUyvFCFi3dKgBUPCeNWCGI69eNHu3dQcwCNJ3kANqhHdkYpBd00PVBritJfxfzH1DCLo0I9CSi1buWYhein9VHZWtzePv/+ucWERRIo+J04QPkV+6P6vgOTRl5U75RctJU=#012[192.168.122.104]*,[np0005532582.ctlplane.localdomain]*,[172.17.0.104]*,[np0005532582.internalapi.localdomain]*,[172.18.0.104]*,[np0005532582.storage.localdomain]*,[172.20.0.104]*,[np0005532582.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005532582.tenant.localdomain]*,[np0005532582.localdomain]*,[np0005532582]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0v47OVdr7YS/5xSUmMc7u26O7OwPomkdDR6s8rrcencbx7seRSeU00QGeRQcJJ023bD3xk26W8iiJTRUDkYSy//cSfHODdDy+CNEfDUTkGzIjiApoLi2b+S4J6wcAldMsj02MZmx67vUHyM5Qwok+22XqopryL8BiGPJbnoUcZy773f5OKPPMNuj3Fyb7jd5mrC7awK4NniZHyHPYBQeBa234HL42fRjcOqCcxuauy5cbz9PeBv5/kg+nYc8cY5qCyLqNhzMVRUa/PcepMBcfThk17LtPGzCYS7IR2cGdUDP6Pe0QD34Hu6+mpwKwYx73v5uHcmy9CeZ8fK83/F84Lr6jxsiwoU2e+hUfzVRq8gnkjk6kuL86eSM2POSGgBYYgCb+Ma6lOkF1MA+rLAh0gAsUhBgVlz6HtaMoDvLOi/NrQeoQyNE1Pv4vPAndmGGc8A7JCtmCMk9VvMy0Ht4IOvtDJFfx1lg7NuMIKqePYTEk56p8wTUNM+BmdJEhFPU=#012[192.168.122.105]*,[np0005532583.ctlplane.localdomain]*,[172.17.0.105]*,[np0005532583.internalapi.localdomain]*,[172.18.0.105]*,[np0005532583.storage.localdomain]*,[172.20.0.105]*,[np0005532583.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005532583.tenant.localdomain]*,[np0005532583.localdomain]*,[np0005532583]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkB1Cq8AQaEBYTlv5Hzs024jg//D6wieNnvsI5WcYj7wckm9vKTJQfUD6yZBMmyPw6+vVzsM16bj2hagkDR5wkO7uSIaMqWrcoQ1h9HkJQLK8QB0iuzUvQzdr22kUgkLII8thNHK4VxF4VhAKNmzqCofZ4ZSaLUMwauFCFUjx1VJISEZdgYRZ4+++wAN5bdK+WrwSOAHJYJWQX2pRRsPiunSdY1BOUKB3sp7IBcQ3MDJgnKlkR7tiGSYB2W8JsLvIsIb0I2EaqmPUTIzKUuxSJnWEls/WyDT9MNkjhobVeAyFZ5TEik4OvobUhVGJ8CsU7O101KQNQ3IywPM+V0UpjA1yK49z5Qs0LjApmqORsTcjOojYaKGr9n64dVjXdFOMwajB9UmMEFtlIngm6kx7mJQGXqYxVAscW34JY832iKOEzQWrUSdo6mVJ7TXhYYcbdFp+G/128SfhNrbHwKinHeE9Nqu48BR7bmRZXO7ef+UMY1dG3AIvFt4JwFvLihZc=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:48:54 localhost python3[37966]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.npsgagus' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:48:54 localhost python3[37984]: ansible-file Invoked with path=/tmp/ansible.npsgagus state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:48:55 localhost python3[38000]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:48:55 localhost python3[38016]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:48:55 localhost python3[38034]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:48:56 localhost python3[38053]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Nov 23 02:48:59 localhost python3[38190]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:48:59 localhost python3[38207]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:49:02 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Nov 23 02:49:02 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Nov 23 02:49:02 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 02:49:02 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 02:49:02 localhost systemd[1]: Reloading.
Nov 23 02:49:02 localhost systemd-rc-local-generator[38276]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:49:02 localhost systemd-sysv-generator[38282]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:49:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:49:03 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 02:49:03 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 02:49:03 localhost systemd[1]: tuned.service: Deactivated successfully.
Nov 23 02:49:03 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 02:49:03 localhost systemd[1]: tuned.service: Consumed 1.684s CPU time.
Nov 23 02:49:03 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 02:49:03 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 02:49:03 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 02:49:03 localhost systemd[1]: run-re294a593402a457d84b3949d9411fcc5.service: Deactivated successfully.
Nov 23 02:49:04 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 02:49:04 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 02:49:04 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 02:49:04 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 02:49:04 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 02:49:04 localhost systemd[1]: run-rb4598ea15c144382a2eca871baf44b1c.service: Deactivated successfully.
Nov 23 02:49:05 localhost python3[38643]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:49:05 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 02:49:05 localhost systemd[1]: tuned.service: Deactivated successfully.
Nov 23 02:49:05 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 02:49:05 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 02:49:07 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 02:49:07 localhost python3[38838]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:08 localhost python3[38855]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Nov 23 02:49:08 localhost python3[38871]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:49:09 localhost python3[38887]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:11 localhost python3[38907]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:11 localhost python3[38924]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:49:14 localhost python3[38940]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:18 localhost python3[38956]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:19 localhost python3[39004]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:19 localhost python3[39049]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884159.124733-71273-67457931928919/source _original_basename=tmph_d861w8 follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:20 localhost python3[39079]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:20 localhost python3[39127]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:21 localhost python3[39170]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884160.7266126-71414-91353825910413/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=72c5ef7909b5cdbbb2310fa1b5c8d166a17f7155 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:21 localhost python3[39232]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:22 localhost python3[39275]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884161.5357385-71466-92094187006643/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=6552073e0e4bb04b7faeda3f8c2098edf889171a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:22 localhost python3[39337]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:23 localhost python3[39380]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884162.4584785-71466-245367218457031/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=1bc51567bc68ec6d87ea2fcfee756b886ebb9f92 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:23 localhost python3[39442]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:24 localhost python3[39485]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884163.412118-71466-4497848111410/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=1bd75eeb71ad8a06f7ad5bd2e02e7279e09e867f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:24 localhost python3[39547]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:25 localhost python3[39590]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884164.3780494-71466-181025661109904/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:25 localhost python3[39652]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:25 localhost python3[39695]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884165.273968-71466-268610164284881/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=475826538d3153c50ae2703c38a7a220adfefe4f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:26 localhost python3[39757]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:26 localhost python3[39800]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884166.089926-71466-98133377951905/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:27 localhost python3[39862]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:27 localhost python3[39905]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884166.9259744-71466-85397281291812/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=66f0a2c6a0832caadadc4d66bd975147c152464b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:28 localhost python3[39967]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:28 localhost python3[40010]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884167.7297149-71466-236992446406115/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:28 localhost python3[40072]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:29 localhost python3[40115]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884168.6169443-71466-229549652432797/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:29 localhost python3[40177]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:30 localhost python3[40220]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884169.4681497-71466-186701186747280/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=d74faadf145d728deba6428eccdab8d17c324656 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:31 localhost python3[40250]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:49:31 localhost python3[40298]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:49:32 localhost python3[40371]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884171.5349455-72255-231167331170617/source _original_basename=tmpdsbkspd8 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:49:34 localhost systemd[35892]: Starting Mark boot as successful...
Nov 23 02:49:34 localhost systemd[35892]: Finished Mark boot as successful.
Nov 23 02:49:37 localhost python3[40448]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 02:49:37 localhost python3[40509]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:42 localhost python3[40526]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:47 localhost python3[40543]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:47 localhost python3[40566]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:52 localhost python3[40583]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:53 localhost python3[40606]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:49:57 localhost python3[40623]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:01 localhost python3[40640]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:02 localhost python3[40663]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:06 localhost python3[40680]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:11 localhost python3[40697]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:11 localhost python3[40720]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:16 localhost python3[40737]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:17 localhost sshd[40739]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:50:20 localhost python3[40756]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:21 localhost python3[40779]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:25 localhost python3[40796]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:31 localhost python3[40813]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:32 localhost python3[40861]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:32 localhost python3[40879]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpawxg0owf recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:32 localhost python3[40909]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:33 localhost python3[40987]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:33 localhost python3[41020]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:34 localhost python3[41099]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:34 localhost python3[41117]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:35 localhost python3[41179]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:35 localhost python3[41197]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:35 localhost python3[41259]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:36 localhost python3[41277]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:36 localhost python3[41354]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:36 localhost python3[41372]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:37 localhost python3[41434]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:37 localhost python3[41452]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:38 localhost python3[41514]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:38 localhost python3[41532]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:39 localhost python3[41594]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:39 localhost python3[41612]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:39 localhost python3[41674]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:40 localhost python3[41692]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:40 localhost python3[41754]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:40 localhost python3[41772]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:41 localhost python3[41834]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:41 localhost python3[41852]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:42 localhost python3[41882]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:50:42 localhost python3[41930]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:42 localhost python3[41948]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpr4ztfmr9 recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:45 localhost python3[41978]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:50:51 localhost python3[41995]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:50:51 localhost python3[42013]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:50:52 localhost python3[42031]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:50:53 localhost systemd[1]: Reloading.
Nov 23 02:50:53 localhost systemd-rc-local-generator[42050]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:50:53 localhost systemd-sysv-generator[42056]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:50:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:50:53 localhost systemd[1]: Starting Netfilter Tables...
Nov 23 02:50:53 localhost systemd[1]: Finished Netfilter Tables.
Nov 23 02:50:53 localhost python3[42121]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:54 localhost python3[42164]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884253.6966233-75117-91516219969625/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:54 localhost python3[42194]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:55 localhost python3[42212]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:50:55 localhost python3[42261]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:56 localhost python3[42304]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884255.4977374-75228-108910559468463/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:56 localhost python3[42366]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:57 localhost python3[42409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884256.4683867-75307-118742907137084/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:57 localhost python3[42471]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:58 localhost python3[42514]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884257.440099-75418-194372745018953/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:58 localhost python3[42576]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:50:59 localhost python3[42619]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884258.2934062-75456-80616822325236/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:50:59 localhost python3[42681]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:00 localhost python3[42724]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884259.2654884-75527-182590773020487/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:00 localhost python3[42754]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:01 localhost python3[42819]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:01 localhost python3[42836]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:02 localhost python3[42853]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:02 localhost python3[42872]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:03 localhost python3[42888]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:03 localhost python3[42904]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:04 localhost python3[42920]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 23 02:51:04 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Nov 23 02:51:05 localhost python3[42940]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Nov 23 02:51:05 localhost kernel: SELinux:  Converting 2704 SID table entries...
Nov 23 02:51:05 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 02:51:05 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 02:51:05 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 02:51:05 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 02:51:05 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 02:51:05 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 02:51:05 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 02:51:06 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 23 02:51:06 localhost python3[42962]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Nov 23 02:51:07 localhost kernel: SELinux:  Converting 2704 SID table entries...
Nov 23 02:51:07 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 02:51:07 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 02:51:07 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 02:51:07 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 02:51:07 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 02:51:07 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 02:51:07 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 02:51:07 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 23 02:51:07 localhost python3[42994]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Nov 23 02:51:08 localhost kernel: SELinux:  Converting 2704 SID table entries...
Nov 23 02:51:08 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 02:51:08 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 02:51:08 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 02:51:08 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 02:51:08 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 02:51:08 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 02:51:08 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 02:51:08 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 23 02:51:09 localhost python3[43016]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:09 localhost python3[43032]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:09 localhost python3[43048]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:10 localhost python3[43064]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:51:10 localhost python3[43080]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:11 localhost python3[43097]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:51:14 localhost python3[43114]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:15 localhost python3[43162]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:15 localhost python3[43205]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884275.0814538-76390-221269962235748/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:16 localhost python3[43235]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 02:51:16 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 02:51:16 localhost systemd[1]: Stopped Load Kernel Modules.
Nov 23 02:51:16 localhost systemd[1]: Stopping Load Kernel Modules...
Nov 23 02:51:16 localhost systemd[1]: Starting Load Kernel Modules...
Nov 23 02:51:16 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 02:51:16 localhost kernel: Bridge firewalling registered
Nov 23 02:51:16 localhost systemd-modules-load[43238]: Inserted module 'br_netfilter'
Nov 23 02:51:16 localhost systemd-modules-load[43238]: Module 'msr' is built in
Nov 23 02:51:16 localhost systemd[1]: Finished Load Kernel Modules.
Nov 23 02:51:16 localhost python3[43289]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:17 localhost python3[43332]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884276.5512564-76428-82782719621667/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:17 localhost python3[43362]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:18 localhost python3[43379]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:18 localhost python3[43397]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:18 localhost python3[43415]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:19 localhost python3[43432]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:19 localhost python3[43449]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:19 localhost python3[43466]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:19 localhost python3[43484]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:20 localhost python3[43502]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:20 localhost python3[43520]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:20 localhost python3[43538]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:21 localhost python3[43556]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:21 localhost python3[43574]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:21 localhost python3[43592]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:22 localhost python3[43609]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:22 localhost python3[43626]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:22 localhost python3[43643]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:23 localhost python3[43660]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 23 02:51:23 localhost python3[43678]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 02:51:23 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 02:51:23 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 23 02:51:23 localhost systemd[1]: Stopping Apply Kernel Variables...
Nov 23 02:51:23 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 23 02:51:23 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 02:51:23 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 23 02:51:24 localhost python3[43698]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:24 localhost python3[43714]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:24 localhost python3[43730]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:25 localhost python3[43746]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:51:25 localhost python3[43762]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:25 localhost python3[43778]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:26 localhost python3[43794]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:26 localhost python3[43810]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:26 localhost python3[43826]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:27 localhost python3[43874]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:27 localhost python3[43917]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884286.8307168-76808-221217493291248/source _original_basename=tmphh7qmu6p follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:27 localhost python3[43947]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:29 localhost python3[43964]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:30 localhost python3[44012]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:30 localhost python3[44055]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884289.8065877-77044-67332819028185/source _original_basename=tmp0j8vz840 follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:30 localhost python3[44085]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:31 localhost python3[44101]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:31 localhost python3[44117]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:31 localhost python3[44133]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:32 localhost python3[44149]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:32 localhost python3[44165]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:32 localhost python3[44181]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:33 localhost python3[44197]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:33 localhost python3[44213]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:33 localhost python3[44229]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Nov 23 02:51:34 localhost python3[44251]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532586.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 23 02:51:34 localhost python3[44275]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Nov 23 02:51:35 localhost python3[44291]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:35 localhost python3[44340]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:36 localhost python3[44383]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884295.5423045-77382-213992385910768/source _original_basename=tmpde59ujwa follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:36 localhost python3[44443]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 23 02:51:36 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 23 02:51:37 localhost python3[44532]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:38 localhost python3[44565]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:38 localhost python3[44581]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Nov 23 02:51:39 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 23 02:51:39 localhost python3[44616]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:51:42 localhost python3[44633]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 02:51:43 localhost python3[44694]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:43 localhost python3[44710]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:44 localhost python3[44770]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:44 localhost python3[44813]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884304.2036731-77814-131480681895649/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=bc9d438deb461641c72d374f5fea3ff2c0b21989 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:45 localhost python3[44875]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:45 localhost python3[44920]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884305.158989-77871-239674805053658/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:46 localhost python3[44950]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:46 localhost python3[44966]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:46 localhost python3[44982]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:47 localhost python3[44998]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:47 localhost python3[45046]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:48 localhost python3[45089]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884307.6528645-77992-95204782247102/source _original_basename=tmpoiulo6s2 follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:48 localhost python3[45119]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:49 localhost python3[45135]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:51:49 localhost python3[45151]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:51:53 localhost python3[45200]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:51:53 localhost python3[45245]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884313.0107727-78224-29110318889735/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:51:54 localhost python3[45276]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:51:54 localhost systemd[1]: Stopping OpenSSH server daemon...
Nov 23 02:51:54 localhost systemd[1]: sshd.service: Deactivated successfully.
Nov 23 02:51:54 localhost systemd[1]: Stopped OpenSSH server daemon.
Nov 23 02:51:54 localhost systemd[1]: sshd.service: Consumed 4.333s CPU time, read 1.9M from disk, written 160.0K to disk.
Nov 23 02:51:54 localhost systemd[1]: Stopped target sshd-keygen.target.
Nov 23 02:51:54 localhost systemd[1]: Stopping sshd-keygen.target...
Nov 23 02:51:54 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 02:51:54 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 02:51:54 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 02:51:54 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 23 02:51:54 localhost systemd[1]: Starting OpenSSH server daemon...
Nov 23 02:51:54 localhost sshd[45280]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:51:54 localhost systemd[1]: Started OpenSSH server daemon.
Nov 23 02:51:54 localhost python3[45296]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:55 localhost python3[45314]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:51:56 localhost python3[45332]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:52:00 localhost python3[45381]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:52:00 localhost python3[45399]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:01 localhost python3[45429]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:52:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 02:52:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3255 writes, 16K keys, 3255 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3255 writes, 143 syncs, 22.76 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3255 writes, 16K keys, 3255 commit groups, 1.0 writes per commit group, ingest: 14.65 MB, 0.02 MB/s#012Interval WAL: 3255 writes, 143 syncs, 22.76 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Nov 23 02:52:02 localhost python3[45479]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:52:02 localhost python3[45497]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:03 localhost python3[45527]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:52:03 localhost systemd[1]: Reloading.
Nov 23 02:52:03 localhost systemd-rc-local-generator[45548]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:52:03 localhost systemd-sysv-generator[45554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:52:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:52:03 localhost systemd[1]: Starting chronyd online sources service...
Nov 23 02:52:03 localhost chronyc[45566]: 200 OK
Nov 23 02:52:03 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Nov 23 02:52:03 localhost systemd[1]: Finished chronyd online sources service.
Nov 23 02:52:03 localhost python3[45582]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:03 localhost chronyd[25884]: System clock was stepped by 0.000072 seconds
Nov 23 02:52:04 localhost python3[45599]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:04 localhost python3[45616]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:04 localhost chronyd[25884]: System clock was stepped by -0.000000 seconds
Nov 23 02:52:04 localhost python3[45633]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:05 localhost python3[45650]: ansible-timezone Invoked with name=UTC hwclock=None
Nov 23 02:52:05 localhost systemd[1]: Starting Time & Date Service...
Nov 23 02:52:05 localhost systemd[1]: Started Time & Date Service.
Nov 23 02:52:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 02:52:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3384 writes, 16K keys, 3384 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3384 writes, 196 syncs, 17.27 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3384 writes, 16K keys, 3384 commit groups, 1.0 writes per commit group, ingest: 15.25 MB, 0.03 MB/s#012Interval WAL: 3384 writes, 196 syncs, 17.27 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55720e44a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55720e44a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Nov 23 02:52:06 localhost python3[45670]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:07 localhost python3[45687]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:07 localhost python3[45704]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Nov 23 02:52:08 localhost python3[45720]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:52:09 localhost python3[45736]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:52:09 localhost python3[45752]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:52:09 localhost python3[45800]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:52:10 localhost python3[45843]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884329.495665-79329-98183214952255/source _original_basename=tmpmdugm09t follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:10 localhost python3[45905]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:52:11 localhost python3[45948]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884330.470098-79455-237322720262032/source _original_basename=tmpz7i7s_ct follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:11 localhost python3[45978]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 02:52:11 localhost systemd[1]: Reloading.
Nov 23 02:52:11 localhost systemd-sysv-generator[46008]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:52:11 localhost systemd-rc-local-generator[46003]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:52:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:52:12 localhost python3[46032]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:52:12 localhost python3[46048]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:12 localhost python3[46065]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:13 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully.
Nov 23 02:52:13 localhost systemd[35892]: Created slice User Background Tasks Slice.
Nov 23 02:52:13 localhost systemd[35892]: Starting Cleanup of User's Temporary Files and Directories...
Nov 23 02:52:13 localhost systemd[35892]: Finished Cleanup of User's Temporary Files and Directories.
Nov 23 02:52:13 localhost python3[46083]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:52:13 localhost python3[46099]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:14 localhost python3[46147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:52:14 localhost python3[46190]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884333.9893844-79638-34247386781157/source _original_basename=tmpeupmptp8 follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:23 localhost sshd[46205]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:52:35 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 02:52:37 localhost python3[46225]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 02:52:38 localhost python3[46241]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None
Nov 23 02:52:38 localhost python3[46271]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 02:52:38 localhost python3[46310]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:39 localhost python3[46351]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:39 localhost python3[46367]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Nov 23 02:52:40 localhost kernel: SELinux:  Converting 2707 SID table entries...
Nov 23 02:52:40 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 02:52:40 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 02:52:40 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 02:52:40 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 02:52:40 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 02:52:40 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 02:52:40 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 02:52:40 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 23 02:52:40 localhost python3[46504]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:52:42 localhost python3[46642]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 
'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [
Nov 23 02:52:42 localhost rsyslogd[760]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Nov 23 02:52:43 localhost python3[46658]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 02:52:43 localhost python3[46674]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 02:52:44 localhost python3[46690]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 
'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 
'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': 
[{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
Nov 23 02:52:49 localhost python3[46738]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:52:49 localhost python3[46781]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884369.0505648-81104-233950779742281/source _original_basename=tmp66gxoylv follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:52:50 localhost python3[46811]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:52:52 localhost python3[46934]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 02:52:54 localhost python3[47055]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Nov 23 02:52:56 localhost python3[47071]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:52:57 localhost python3[47088]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 02:53:01 localhost dbus-broker-launch[15940]: Noticed file-system modification, trigger reload.
Nov 23 02:53:01 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Nov 23 02:53:01 localhost dbus-broker-launch[15940]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 23 02:53:01 localhost dbus-broker-launch[15940]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 23 02:53:01 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Nov 23 02:53:01 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Nov 23 02:53:01 localhost systemd[1]: Reexecuting.
Nov 23 02:53:01 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 23 02:53:01 localhost systemd[1]: Detected virtualization kvm.
Nov 23 02:53:01 localhost systemd[1]: Detected architecture x86-64.
Nov 23 02:53:01 localhost systemd-sysv-generator[47146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:53:01 localhost systemd-rc-local-generator[47142]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:53:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:53:09 localhost kernel: SELinux:  Converting 2707 SID table entries...
Nov 23 02:53:09 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 02:53:09 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 02:53:09 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 02:53:09 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 02:53:09 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 02:53:09 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 02:53:09 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 02:53:10 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Nov 23 02:53:10 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 23 02:53:10 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Nov 23 02:53:11 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 02:53:11 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 02:53:11 localhost systemd[1]: Reloading.
Nov 23 02:53:11 localhost systemd-sysv-generator[47223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:53:11 localhost systemd-rc-local-generator[47219]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:53:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:53:11 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 02:53:11 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 02:53:11 localhost systemd-journald[619]: Journal stopped
Nov 23 02:53:11 localhost systemd-journald[619]: Received SIGTERM from PID 1 (systemd).
Nov 23 02:53:11 localhost systemd[1]: Stopping Journal Service...
Nov 23 02:53:11 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 23 02:53:11 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 02:53:11 localhost systemd[1]: Stopped Journal Service.
Nov 23 02:53:11 localhost systemd[1]: systemd-journald.service: Consumed 1.843s CPU time.
Nov 23 02:53:11 localhost systemd[1]: Starting Journal Service...
Nov 23 02:53:11 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 02:53:11 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 23 02:53:11 localhost systemd[1]: systemd-udevd.service: Consumed 2.962s CPU time.
Nov 23 02:53:11 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 23 02:53:11 localhost systemd-journald[47537]: Journal started
Nov 23 02:53:11 localhost systemd-journald[47537]: Runtime Journal (/run/log/journal/6e0090cd4cf296f54418e234b90f721c) is 12.2M, max 314.7M, 302.5M free.
Nov 23 02:53:11 localhost systemd[1]: Started Journal Service.
Nov 23 02:53:11 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Nov 23 02:53:11 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 02:53:11 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 02:53:11 localhost systemd-udevd[47548]: Using default interface naming scheme 'rhel-9.0'.
Nov 23 02:53:11 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 23 02:53:11 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 02:53:11 localhost systemd[1]: Reloading.
Nov 23 02:53:11 localhost systemd-sysv-generator[48104]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:53:11 localhost systemd-rc-local-generator[48096]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:53:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:53:12 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 02:53:12 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 02:53:12 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 02:53:12 localhost systemd[1]: man-db-cache-update.service: Consumed 1.450s CPU time.
Nov 23 02:53:12 localhost systemd[1]: run-r08531148528e4ab18ae90e753c2ab063.service: Deactivated successfully.
Nov 23 02:53:12 localhost systemd[1]: run-r8e53c4b6de8a47068b040ce076f2721a.service: Deactivated successfully.
Nov 23 02:53:13 localhost python3[48583]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Nov 23 02:53:14 localhost python3[48602]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 02:53:15 localhost python3[48620]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:53:15 localhost python3[48620]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Nov 23 02:53:15 localhost python3[48620]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Nov 23 02:53:22 localhost podman[48633]: 2025-11-23 07:53:15.784754017 +0000 UTC m=+0.024851460 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Nov 23 02:53:22 localhost python3[48620]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json
Nov 23 02:53:23 localhost python3[48735]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:53:23 localhost python3[48735]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Nov 23 02:53:23 localhost python3[48735]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Nov 23 02:53:30 localhost podman[48748]: 2025-11-23 07:53:23.473630201 +0000 UTC m=+0.043375228 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Nov 23 02:53:30 localhost python3[48735]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json
Nov 23 02:53:30 localhost python3[48851]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:53:30 localhost python3[48851]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Nov 23 02:53:30 localhost python3[48851]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Nov 23 02:53:47 localhost podman[48863]: 2025-11-23 07:53:30.922084155 +0000 UTC m=+0.045316089 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 02:53:47 localhost python3[48851]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json
Nov 23 02:53:47 localhost podman[49550]: 2025-11-23 07:53:47.788836592 +0000 UTC m=+0.121694099 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vcs-type=git, io.buildah.version=1.33.12)
Nov 23 02:53:47 localhost python3[49557]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:53:47 localhost python3[49557]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Nov 23 02:53:47 localhost podman[49550]: 2025-11-23 07:53:47.870917564 +0000 UTC m=+0.203775081 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, name=rhceph, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, RELEASE=main)
Nov 23 02:53:47 localhost python3[49557]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Nov 23 02:54:01 localhost podman[49607]: 2025-11-23 07:53:47.995904868 +0000 UTC m=+0.042011562 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 02:54:01 localhost python3[49557]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json
Nov 23 02:54:02 localhost python3[49783]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:54:02 localhost python3[49783]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Nov 23 02:54:02 localhost python3[49783]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Nov 23 02:54:08 localhost podman[49796]: 2025-11-23 07:54:02.325986514 +0000 UTC m=+0.049554720 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Nov 23 02:54:08 localhost python3[49783]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json
Nov 23 02:54:08 localhost python3[50081]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:54:08 localhost python3[50081]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Nov 23 02:54:08 localhost python3[50081]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Nov 23 02:54:13 localhost podman[50094]: 2025-11-23 07:54:09.015272193 +0000 UTC m=+0.042193548 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Nov 23 02:54:13 localhost python3[50081]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json
Nov 23 02:54:13 localhost python3[50173]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:54:13 localhost python3[50173]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Nov 23 02:54:13 localhost python3[50173]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Nov 23 02:54:15 localhost podman[50186]: 2025-11-23 07:54:13.673736015 +0000 UTC m=+0.044042174 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Nov 23 02:54:15 localhost python3[50173]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json
Nov 23 02:54:16 localhost python3[50264]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:54:16 localhost python3[50264]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Nov 23 02:54:16 localhost python3[50264]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Nov 23 02:54:18 localhost podman[50276]: 2025-11-23 07:54:16.45927337 +0000 UTC m=+0.050756271 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Nov 23 02:54:18 localhost python3[50264]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json
Nov 23 02:54:18 localhost python3[50351]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:54:18 localhost python3[50351]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Nov 23 02:54:18 localhost python3[50351]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Nov 23 02:54:21 localhost podman[50363]: 2025-11-23 07:54:18.950368188 +0000 UTC m=+0.031187398 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Nov 23 02:54:21 localhost python3[50351]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json
Nov 23 02:54:21 localhost python3[50441]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:54:21 localhost python3[50441]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Nov 23 02:54:21 localhost python3[50441]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Nov 23 02:54:25 localhost podman[50453]: 2025-11-23 07:54:21.552051784 +0000 UTC m=+0.044392954 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Nov 23 02:54:25 localhost python3[50441]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Nov 23 02:54:25 localhost python3[50544]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 23 02:54:25 localhost python3[50544]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Nov 23 02:54:25 localhost python3[50544]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Nov 23 02:54:27 localhost podman[50557]: 2025-11-23 07:54:25.883287773 +0000 UTC m=+0.043369168 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Nov 23 02:54:27 localhost python3[50544]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Nov 23 02:54:28 localhost python3[50634]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:54:30 localhost ansible-async_wrapper.py[50806]: Invoked with 638226214367 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884469.7430294-83783-267083693667731/AnsiballZ_command.py _
Nov 23 02:54:30 localhost ansible-async_wrapper.py[50809]: Starting module and watcher
Nov 23 02:54:30 localhost ansible-async_wrapper.py[50809]: Start watching 50810 (3600)
Nov 23 02:54:30 localhost ansible-async_wrapper.py[50810]: Start module (50810)
Nov 23 02:54:30 localhost ansible-async_wrapper.py[50806]: Return async_wrapper task started.
Nov 23 02:54:30 localhost python3[50830]: ansible-ansible.legacy.async_status Invoked with jid=638226214367.50806 mode=status _async_dir=/tmp/.ansible_async
Nov 23 02:54:31 localhost sshd[50845]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:54:34 localhost puppet-user[50827]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:34 localhost puppet-user[50827]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:34 localhost puppet-user[50827]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:34 localhost puppet-user[50827]:   (file & line not available)
Nov 23 02:54:34 localhost puppet-user[50827]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:34 localhost puppet-user[50827]:   (file & line not available)
Nov 23 02:54:34 localhost puppet-user[50827]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Nov 23 02:54:34 localhost puppet-user[50827]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Nov 23 02:54:34 localhost puppet-user[50827]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.11 seconds
Nov 23 02:54:34 localhost puppet-user[50827]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Nov 23 02:54:34 localhost puppet-user[50827]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Nov 23 02:54:34 localhost puppet-user[50827]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Nov 23 02:54:34 localhost puppet-user[50827]: Notice: Applied catalog in 0.05 seconds
Nov 23 02:54:34 localhost puppet-user[50827]: Application:
Nov 23 02:54:34 localhost puppet-user[50827]:   Initial environment: production
Nov 23 02:54:34 localhost puppet-user[50827]:   Converged environment: production
Nov 23 02:54:34 localhost puppet-user[50827]:         Run mode: user
Nov 23 02:54:34 localhost puppet-user[50827]: Changes:
Nov 23 02:54:34 localhost puppet-user[50827]:            Total: 3
Nov 23 02:54:34 localhost puppet-user[50827]: Events:
Nov 23 02:54:34 localhost puppet-user[50827]:          Success: 3
Nov 23 02:54:34 localhost puppet-user[50827]:            Total: 3
Nov 23 02:54:34 localhost puppet-user[50827]: Resources:
Nov 23 02:54:34 localhost puppet-user[50827]:          Changed: 3
Nov 23 02:54:34 localhost puppet-user[50827]:      Out of sync: 3
Nov 23 02:54:34 localhost puppet-user[50827]:            Total: 10
Nov 23 02:54:34 localhost puppet-user[50827]: Time:
Nov 23 02:54:34 localhost puppet-user[50827]:         Schedule: 0.00
Nov 23 02:54:34 localhost puppet-user[50827]:             File: 0.00
Nov 23 02:54:34 localhost puppet-user[50827]:             Exec: 0.01
Nov 23 02:54:34 localhost puppet-user[50827]:           Augeas: 0.02
Nov 23 02:54:34 localhost puppet-user[50827]:   Transaction evaluation: 0.04
Nov 23 02:54:34 localhost puppet-user[50827]:   Catalog application: 0.05
Nov 23 02:54:34 localhost puppet-user[50827]:   Config retrieval: 0.14
Nov 23 02:54:34 localhost puppet-user[50827]:         Last run: 1763884474
Nov 23 02:54:34 localhost puppet-user[50827]:       Filebucket: 0.00
Nov 23 02:54:34 localhost puppet-user[50827]:            Total: 0.05
Nov 23 02:54:34 localhost puppet-user[50827]: Version:
Nov 23 02:54:34 localhost puppet-user[50827]:           Config: 1763884474
Nov 23 02:54:34 localhost puppet-user[50827]:           Puppet: 7.10.0
Nov 23 02:54:34 localhost ansible-async_wrapper.py[50810]: Module complete (50810)
Nov 23 02:54:35 localhost ansible-async_wrapper.py[50809]: Done in kid B.
Nov 23 02:54:40 localhost python3[50959]: ansible-ansible.legacy.async_status Invoked with jid=638226214367.50806 mode=status _async_dir=/tmp/.ansible_async
Nov 23 02:54:41 localhost python3[50975]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 02:54:42 localhost python3[50991]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:54:42 localhost python3[51039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:54:42 localhost python3[51082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884482.2548099-84074-39049894622219/source _original_basename=tmp_dx9bkbl follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 02:54:43 localhost python3[51112]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:54:44 localhost python3[51215]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Nov 23 02:54:45 localhost python3[51234]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 02:54:45 localhost python3[51250]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005532586 step=1 update_config_hash_only=False
Nov 23 02:54:46 localhost python3[51266]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:54:47 localhost python3[51282]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Nov 23 02:54:47 localhost python3[51298]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 02:54:48 localhost python3[51339]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Nov 23 02:54:48 localhost podman[51500]: 2025-11-23 07:54:48.88709738 +0000 UTC m=+0.092181686 container create bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=container-puppet-metrics_qdr, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:54:48 localhost podman[51500]: 2025-11-23 07:54:48.821103523 +0000 UTC m=+0.026187829 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Nov 23 02:54:48 localhost podman[51530]: 2025-11-23 07:54:48.932254109 +0000 UTC m=+0.109090262 container create 00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 02:54:48 localhost systemd[1]: Started libpod-conmon-bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b.scope.
Nov 23 02:54:48 localhost podman[51516]: 2025-11-23 07:54:48.95014242 +0000 UTC m=+0.142053891 container create 3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, container_name=container-puppet-nova_libvirt, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 02:54:48 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:48 localhost podman[51557]: 2025-11-23 07:54:48.968051591 +0000 UTC m=+0.095146226 container create 82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, container_name=container-puppet-crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true)
Nov 23 02:54:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21324da701cda3a12eecea997e418bbdcb417c66b3380ebb8a1f96a7c081e785/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:48 localhost systemd[1]: Started libpod-conmon-00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be.scope.
Nov 23 02:54:48 localhost systemd[1]: Started libpod-conmon-3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0.scope.
Nov 23 02:54:48 localhost podman[51530]: 2025-11-23 07:54:48.883437814 +0000 UTC m=+0.060273987 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Nov 23 02:54:48 localhost podman[51500]: 2025-11-23 07:54:48.991136839 +0000 UTC m=+0.196221135 container init bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12)
Nov 23 02:54:48 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:48 localhost systemd[1]: Started libpod-conmon-82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b.scope.
Nov 23 02:54:48 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98aa654fe2c1af9d2382bcadf7b54249f9a3b56612c5557d7ee9d5ac58709110/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98aa654fe2c1af9d2382bcadf7b54249f9a3b56612c5557d7ee9d5ac58709110/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/870d975636503a07ec195e49b00132bfc6eee6e29391d2ce8497d2068e2c55c9/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:49 localhost podman[51516]: 2025-11-23 07:54:48.879794278 +0000 UTC m=+0.071705779 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 02:54:49 localhost podman[51500]: 2025-11-23 07:54:49.002929049 +0000 UTC m=+0.208013345 container start bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=container-puppet-metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd)
Nov 23 02:54:49 localhost podman[51500]: 2025-11-23 07:54:49.003992447 +0000 UTC m=+0.209076783 container attach bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, container_name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4)
Nov 23 02:54:49 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:49 localhost podman[51530]: 2025-11-23 07:54:49.004435328 +0000 UTC m=+0.181271491 container init 00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_puppet_step1, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044)
Nov 23 02:54:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab58928ede0dd62dc1c6a47bb498643050414b3c034952943115aaac1d21638e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:49 localhost podman[51530]: 2025-11-23 07:54:49.012021268 +0000 UTC m=+0.188857431 container start 00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Nov 23 02:54:49 localhost podman[51530]: 2025-11-23 07:54:49.013088286 +0000 UTC m=+0.189924459 container attach 00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container)
Nov 23 02:54:49 localhost podman[51557]: 2025-11-23 07:54:49.015183741 +0000 UTC m=+0.142278386 container init 82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, vcs-type=git, name=rhosp17/openstack-cron, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Nov 23 02:54:49 localhost podman[51557]: 2025-11-23 07:54:49.02043548 +0000 UTC m=+0.147530135 container start 82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, managed_by=tripleo_ansible, container_name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Nov 23 02:54:49 localhost podman[51557]: 2025-11-23 07:54:49.020764168 +0000 UTC m=+0.147858813 container attach 82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, container_name=container-puppet-crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, vcs-type=git)
Nov 23 02:54:49 localhost podman[51557]: 2025-11-23 07:54:48.924846874 +0000 UTC m=+0.051941519 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Nov 23 02:54:49 localhost podman[51558]: 2025-11-23 07:54:48.926669412 +0000 UTC m=+0.049328490 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Nov 23 02:54:49 localhost podman[51558]: 2025-11-23 07:54:49.848500334 +0000 UTC m=+0.971159412 container create 92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, version=17.1.12, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=container-puppet-collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Nov 23 02:54:50 localhost podman[51516]: 2025-11-23 07:54:50.022952306 +0000 UTC m=+1.214863797 container init 3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute 
role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-19T00:35:22Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=)
Nov 23 02:54:50 localhost podman[51516]: 2025-11-23 07:54:50.032755453 +0000 UTC m=+1.224666944 container start 3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container)
Nov 23 02:54:50 localhost podman[51516]: 2025-11-23 07:54:50.032990259 +0000 UTC m=+1.224901770 container attach 3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt)
Nov 23 02:54:50 localhost systemd[1]: Started libpod-conmon-92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e.scope.
Nov 23 02:54:50 localhost systemd[1]: tmp-crun.SXFTNF.mount: Deactivated successfully.
Nov 23 02:54:50 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ebbfbb226c83e40f7926e5ff88fb92af0ff068958d5052a05d57da8af7f4e7/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:50 localhost podman[51558]: 2025-11-23 07:54:50.131889903 +0000 UTC m=+1.254548951 container init 92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-collectd, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 02:54:50 localhost podman[51558]: 2025-11-23 07:54:50.141840374 +0000 UTC m=+1.264499432 container start 92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, vcs-type=git, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-collectd, version=17.1.12, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.openshift.expose-services=)
Nov 23 02:54:50 localhost podman[51558]: 2025-11-23 07:54:50.143749985 +0000 UTC m=+1.266409033 container attach 92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=container-puppet-collectd, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044)
Nov 23 02:54:50 localhost podman[51422]: 2025-11-23 07:54:48.750747832 +0000 UTC m=+0.037272523 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Nov 23 02:54:50 localhost podman[51798]: 2025-11-23 07:54:50.967579848 +0000 UTC m=+0.093976944 container create bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-central-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, container_name=container-puppet-ceilometer, name=rhosp17/openstack-ceilometer-central, build-date=2025-11-19T00:11:59Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public)
Nov 23 02:54:51 localhost systemd[1]: Started libpod-conmon-bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68.scope.
Nov 23 02:54:51 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1efd3f89201e2ae0ff08eee424918b729c04d8c64dfb0844aa7373839c58b35/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:51 localhost podman[51798]: 2025-11-23 07:54:50.919307998 +0000 UTC m=+0.045705154 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Nov 23 02:54:51 localhost podman[51798]: 2025-11-23 07:54:51.028179643 +0000 UTC m=+0.154576729 container init bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=container-puppet-ceilometer, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, com.redhat.component=openstack-ceilometer-central-container, build-date=2025-11-19T00:11:59Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 02:54:51 localhost podman[51798]: 2025-11-23 07:54:51.035518456 +0000 UTC m=+0.161915532 container start bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, url=https://www.redhat.com, config_id=tripleo_puppet_step1, io.openshift.expose-services=, container_name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-central-container, build-date=2025-11-19T00:11:59Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Nov 23 02:54:51 localhost podman[51798]: 2025-11-23 07:54:51.035897516 +0000 UTC m=+0.162294632 container attach bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vendor=Red Hat, Inc., container_name=container-puppet-ceilometer, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-central, build-date=2025-11-19T00:11:59Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, tcib_managed=true, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-central-container)
Nov 23 02:54:51 localhost puppet-user[51719]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:51 localhost puppet-user[51719]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:51 localhost puppet-user[51719]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:51 localhost puppet-user[51719]:   (file & line not available)
Nov 23 02:54:51 localhost puppet-user[51719]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:51 localhost puppet-user[51719]:   (file & line not available)
Nov 23 02:54:51 localhost ovs-vsctl[51976]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Nov 23 02:54:51 localhost puppet-user[51719]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.08 seconds
Nov 23 02:54:51 localhost puppet-user[51715]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:51 localhost puppet-user[51715]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:51 localhost puppet-user[51715]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:51 localhost puppet-user[51715]:   (file & line not available)
Nov 23 02:54:51 localhost puppet-user[51719]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0'
Nov 23 02:54:51 localhost puppet-user[51719]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created
Nov 23 02:54:51 localhost puppet-user[51715]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:51 localhost puppet-user[51715]:   (file & line not available)
Nov 23 02:54:51 localhost puppet-user[51719]: Notice: Applied catalog in 0.04 seconds
Nov 23 02:54:51 localhost puppet-user[51719]: Application:
Nov 23 02:54:51 localhost puppet-user[51719]:   Initial environment: production
Nov 23 02:54:51 localhost puppet-user[51719]:   Converged environment: production
Nov 23 02:54:51 localhost puppet-user[51719]:         Run mode: user
Nov 23 02:54:51 localhost puppet-user[51719]: Changes:
Nov 23 02:54:51 localhost puppet-user[51719]:            Total: 2
Nov 23 02:54:51 localhost puppet-user[51719]: Events:
Nov 23 02:54:51 localhost puppet-user[51719]:          Success: 2
Nov 23 02:54:51 localhost puppet-user[51719]:            Total: 2
Nov 23 02:54:51 localhost puppet-user[51719]: Resources:
Nov 23 02:54:51 localhost puppet-user[51719]:          Changed: 2
Nov 23 02:54:51 localhost puppet-user[51719]:      Out of sync: 2
Nov 23 02:54:51 localhost puppet-user[51719]:          Skipped: 7
Nov 23 02:54:51 localhost puppet-user[51719]:            Total: 9
Nov 23 02:54:51 localhost puppet-user[51719]: Time:
Nov 23 02:54:51 localhost puppet-user[51719]:             File: 0.01
Nov 23 02:54:51 localhost puppet-user[51719]:             Cron: 0.01
Nov 23 02:54:51 localhost puppet-user[51719]:   Transaction evaluation: 0.03
Nov 23 02:54:51 localhost puppet-user[51719]:   Catalog application: 0.04
Nov 23 02:54:51 localhost puppet-user[51719]:   Config retrieval: 0.10
Nov 23 02:54:51 localhost puppet-user[51719]:         Last run: 1763884491
Nov 23 02:54:51 localhost puppet-user[51719]:            Total: 0.04
Nov 23 02:54:51 localhost puppet-user[51719]: Version:
Nov 23 02:54:51 localhost puppet-user[51719]:           Config: 1763884491
Nov 23 02:54:51 localhost puppet-user[51719]:           Puppet: 7.10.0
Nov 23 02:54:51 localhost puppet-user[51765]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:51 localhost puppet-user[51765]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:51 localhost puppet-user[51765]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:51 localhost puppet-user[51765]:   (file & line not available)
Nov 23 02:54:51 localhost puppet-user[51765]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:51 localhost puppet-user[51765]:   (file & line not available)
Nov 23 02:54:51 localhost puppet-user[51715]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.10 seconds
Nov 23 02:54:51 localhost puppet-user[51751]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:51 localhost puppet-user[51751]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:51 localhost puppet-user[51751]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:51 localhost puppet-user[51751]:   (file & line not available)
Nov 23 02:54:51 localhost puppet-user[51715]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully
Nov 23 02:54:51 localhost puppet-user[51715]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created
Nov 23 02:54:51 localhost puppet-user[51717]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:51 localhost puppet-user[51717]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:51 localhost puppet-user[51717]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:51 localhost puppet-user[51717]:   (file & line not available)
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:52 localhost puppet-user[51751]:   (file & line not available)
Nov 23 02:54:52 localhost puppet-user[51715]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully
Nov 23 02:54:52 localhost puppet-user[51717]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:52 localhost puppet-user[51717]:   (file & line not available)
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: Accepting previously invalid value for target type 'Integer'
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.13 seconds
Nov 23 02:54:52 localhost systemd[1]: libpod-82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b.scope: Deactivated successfully.
Nov 23 02:54:52 localhost systemd[1]: libpod-82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b.scope: Consumed 2.058s CPU time.
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root'
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root'
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755'
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created
Nov 23 02:54:52 localhost podman[51557]: 2025-11-23 07:54:52.162803596 +0000 UTC m=+3.289898231 container died 82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, container_name=container-puppet-crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}09206c0041c6edfa14fdc9d9047625ff04f6e7bbb8451bfc98f28dab2ffc9e59'
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created
Nov 23 02:54:52 localhost puppet-user[51717]: Notice: Applied catalog in 0.03 seconds
Nov 23 02:54:52 localhost puppet-user[51717]: Application:
Nov 23 02:54:52 localhost puppet-user[51717]:   Initial environment: production
Nov 23 02:54:52 localhost puppet-user[51717]:   Converged environment: production
Nov 23 02:54:52 localhost puppet-user[51717]:         Run mode: user
Nov 23 02:54:52 localhost puppet-user[51717]: Changes:
Nov 23 02:54:52 localhost puppet-user[51717]:            Total: 7
Nov 23 02:54:52 localhost puppet-user[51717]: Events:
Nov 23 02:54:52 localhost puppet-user[51717]:          Success: 7
Nov 23 02:54:52 localhost puppet-user[51717]:            Total: 7
Nov 23 02:54:52 localhost puppet-user[51717]: Resources:
Nov 23 02:54:52 localhost puppet-user[51717]:          Skipped: 13
Nov 23 02:54:52 localhost puppet-user[51717]:          Changed: 5
Nov 23 02:54:52 localhost puppet-user[51717]:      Out of sync: 5
Nov 23 02:54:52 localhost puppet-user[51717]:            Total: 20
Nov 23 02:54:52 localhost puppet-user[51717]: Time:
Nov 23 02:54:52 localhost puppet-user[51717]:             File: 0.01
Nov 23 02:54:52 localhost puppet-user[51717]:   Transaction evaluation: 0.03
Nov 23 02:54:52 localhost puppet-user[51717]:   Catalog application: 0.03
Nov 23 02:54:52 localhost puppet-user[51717]:   Config retrieval: 0.16
Nov 23 02:54:52 localhost puppet-user[51717]:         Last run: 1763884492
Nov 23 02:54:52 localhost puppet-user[51717]:            Total: 0.03
Nov 23 02:54:52 localhost puppet-user[51717]: Version:
Nov 23 02:54:52 localhost puppet-user[51717]:           Config: 1763884491
Nov 23 02:54:52 localhost puppet-user[51717]:           Puppet: 7.10.0
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.34 seconds
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \
Nov 23 02:54:52 localhost puppet-user[51751]: in a future release. Use nova::cinder::os_region_name instead
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \
Nov 23 02:54:52 localhost puppet-user[51751]: in a future release. Use nova::cinder::catalog_info instead
Nov 23 02:54:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:52 localhost systemd[1]: var-lib-containers-storage-overlay-ab58928ede0dd62dc1c6a47bb498643050414b3c034952943115aaac1d21638e-merged.mount: Deactivated successfully.
Nov 23 02:54:52 localhost podman[52205]: 2025-11-23 07:54:52.266838684 +0000 UTC m=+0.094686303 container cleanup 82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, release=1761123044, container_name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 02:54:52 localhost systemd[1]: libpod-conmon-82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b.scope: Deactivated successfully.
Nov 23 02:54:52 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640'
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}8dd3769945b86c38433504b97f7851a931eb3c94b667298d10a9796a3d020595'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c'
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5)
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5)
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5)
Nov 23 02:54:52 localhost puppet-user[51715]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34'
Nov 23 02:54:52 localhost puppet-user[51715]: Notice: Applied catalog in 0.50 seconds
Nov 23 02:54:52 localhost puppet-user[51715]: Application:
Nov 23 02:54:52 localhost puppet-user[51715]:   Initial environment: production
Nov 23 02:54:52 localhost puppet-user[51715]:   Converged environment: production
Nov 23 02:54:52 localhost puppet-user[51715]:         Run mode: user
Nov 23 02:54:52 localhost puppet-user[51715]: Changes:
Nov 23 02:54:52 localhost puppet-user[51715]:            Total: 4
Nov 23 02:54:52 localhost puppet-user[51715]: Events:
Nov 23 02:54:52 localhost puppet-user[51715]:          Success: 4
Nov 23 02:54:52 localhost puppet-user[51715]:            Total: 4
Nov 23 02:54:52 localhost puppet-user[51715]: Resources:
Nov 23 02:54:52 localhost puppet-user[51715]:          Changed: 4
Nov 23 02:54:52 localhost puppet-user[51715]:      Out of sync: 4
Nov 23 02:54:52 localhost puppet-user[51715]:          Skipped: 8
Nov 23 02:54:52 localhost puppet-user[51715]:            Total: 13
Nov 23 02:54:52 localhost puppet-user[51715]: Time:
Nov 23 02:54:52 localhost puppet-user[51715]:             File: 0.00
Nov 23 02:54:52 localhost puppet-user[51715]:             Exec: 0.05
Nov 23 02:54:52 localhost puppet-user[51715]:   Config retrieval: 0.14
Nov 23 02:54:52 localhost puppet-user[51715]:           Augeas: 0.43
Nov 23 02:54:52 localhost puppet-user[51715]:   Transaction evaluation: 0.49
Nov 23 02:54:52 localhost puppet-user[51715]:   Catalog application: 0.50
Nov 23 02:54:52 localhost puppet-user[51715]:         Last run: 1763884492
Nov 23 02:54:52 localhost puppet-user[51715]:            Total: 0.50
Nov 23 02:54:52 localhost puppet-user[51715]: Version:
Nov 23 02:54:52 localhost puppet-user[51715]:           Config: 1763884491
Nov 23 02:54:52 localhost puppet-user[51715]:           Puppet: 7.10.0
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba'
Nov 23 02:54:52 localhost systemd[1]: libpod-bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b.scope: Deactivated successfully.
Nov 23 02:54:52 localhost systemd[1]: libpod-bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b.scope: Consumed 2.364s CPU time.
Nov 23 02:54:52 localhost podman[51500]: 2025-11-23 07:54:52.462325069 +0000 UTC m=+3.667409395 container died bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=container-puppet-metrics_qdr, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046'
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e'
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set.
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62'
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed
Nov 23 02:54:52 localhost puppet-user[51765]: Notice: Applied catalog in 0.25 seconds
Nov 23 02:54:52 localhost puppet-user[51765]: Application:
Nov 23 02:54:52 localhost puppet-user[51765]:   Initial environment: production
Nov 23 02:54:52 localhost puppet-user[51765]:   Converged environment: production
Nov 23 02:54:52 localhost puppet-user[51765]:         Run mode: user
Nov 23 02:54:52 localhost puppet-user[51765]: Changes:
Nov 23 02:54:52 localhost puppet-user[51765]:            Total: 43
Nov 23 02:54:52 localhost puppet-user[51765]: Events:
Nov 23 02:54:52 localhost puppet-user[51765]:          Success: 43
Nov 23 02:54:52 localhost puppet-user[51765]:            Total: 43
Nov 23 02:54:52 localhost puppet-user[51765]: Resources:
Nov 23 02:54:52 localhost puppet-user[51765]:          Skipped: 14
Nov 23 02:54:52 localhost puppet-user[51765]:          Changed: 38
Nov 23 02:54:52 localhost puppet-user[51765]:      Out of sync: 38
Nov 23 02:54:52 localhost puppet-user[51765]:            Total: 82
Nov 23 02:54:52 localhost puppet-user[51765]: Time:
Nov 23 02:54:52 localhost puppet-user[51765]:      Concat file: 0.00
Nov 23 02:54:52 localhost puppet-user[51765]:             File: 0.10
Nov 23 02:54:52 localhost puppet-user[51765]:   Transaction evaluation: 0.24
Nov 23 02:54:52 localhost puppet-user[51765]:   Catalog application: 0.25
Nov 23 02:54:52 localhost puppet-user[51765]:   Config retrieval: 0.41
Nov 23 02:54:52 localhost puppet-user[51765]:         Last run: 1763884492
Nov 23 02:54:52 localhost puppet-user[51765]:   Concat fragment: 0.00
Nov 23 02:54:52 localhost puppet-user[51765]:            Total: 0.25
Nov 23 02:54:52 localhost puppet-user[51765]: Version:
Nov 23 02:54:52 localhost puppet-user[51765]:           Config: 1763884491
Nov 23 02:54:52 localhost puppet-user[51765]:           Puppet: 7.10.0
Nov 23 02:54:52 localhost podman[52292]: 2025-11-23 07:54:52.55925246 +0000 UTC m=+0.086252141 container cleanup bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, container_name=container-puppet-metrics_qdr, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12)
Nov 23 02:54:52 localhost systemd[1]: libpod-conmon-bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b.scope: Deactivated successfully.
Nov 23 02:54:52 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file 
--log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Nov 23 02:54:52 localhost puppet-user[51751]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used.
Nov 23 02:54:52 localhost podman[52341]: 2025-11-23 07:54:52.739440173 +0000 UTC m=+0.068284529 container create f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, name=rhosp17/openstack-rsyslog, config_id=tripleo_puppet_step1, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, io.openshift.expose-services=, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 02:54:52 localhost systemd[1]: Started libpod-conmon-f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5.scope.
Nov 23 02:54:52 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:52 localhost systemd[1]: libpod-00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be.scope: Deactivated successfully.
Nov 23 02:54:52 localhost systemd[1]: libpod-00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be.scope: Consumed 2.646s CPU time.
Nov 23 02:54:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c7be86f522835b38ebdead166e5392bec044956b756fc70136b6abae4f549a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:52 localhost podman[52341]: 2025-11-23 07:54:52.708648273 +0000 UTC m=+0.037492639 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Nov 23 02:54:52 localhost podman[52341]: 2025-11-23 07:54:52.840561024 +0000 UTC m=+0.169405380 container init f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, io.openshift.expose-services=, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com)
Nov 23 02:54:52 localhost podman[51530]: 2025-11-23 07:54:52.841243292 +0000 UTC m=+4.018079455 container died 00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid)
Nov 23 02:54:52 localhost systemd[1]: var-lib-containers-storage-overlay-21324da701cda3a12eecea997e418bbdcb417c66b3380ebb8a1f96a7c081e785-merged.mount: Deactivated successfully.
Nov 23 02:54:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bba25b44764f1cf7606183740674f4850d3650aec5c24ff683aef838a272ea2b-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:52 localhost podman[52341]: 2025-11-23 07:54:52.954876243 +0000 UTC m=+0.283720639 container start f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, name=rhosp17/openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_puppet_step1, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, release=1761123044, container_name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 02:54:52 localhost podman[52341]: 2025-11-23 07:54:52.955222732 +0000 UTC m=+0.284067128 container attach f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, 
distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 02:54:52 localhost systemd[1]: libpod-92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e.scope: Deactivated successfully.
Nov 23 02:54:52 localhost systemd[1]: libpod-92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e.scope: Consumed 2.568s CPU time.
Nov 23 02:54:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:53 localhost podman[51558]: 2025-11-23 07:54:53.0193354 +0000 UTC m=+4.141994488 container died 92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, container_name=container-puppet-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Nov 23 02:54:53 localhost systemd[1]: var-lib-containers-storage-overlay-98aa654fe2c1af9d2382bcadf7b54249f9a3b56612c5557d7ee9d5ac58709110-merged.mount: Deactivated successfully.
Nov 23 02:54:53 localhost podman[52489]: 2025-11-23 07:54:53.073898955 +0000 UTC m=+0.092006492 container cleanup 92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, container_name=container-puppet-collectd, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 02:54:53 localhost systemd[1]: libpod-conmon-92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e.scope: Deactivated successfully.
Nov 23 02:54:53 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Nov 23 02:54:53 localhost podman[52437]: 2025-11-23 07:54:53.090222815 +0000 UTC m=+0.237714547 container cleanup 00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=container-puppet-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:53 localhost puppet-user[51840]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:53 localhost systemd[1]: libpod-conmon-00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be.scope: Deactivated successfully.
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:53 localhost puppet-user[51840]:   (file & line not available)
Nov 23 02:54:53 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:53 localhost puppet-user[51840]:   (file & line not available)
Nov 23 02:54:53 localhost podman[52568]: 2025-11-23 07:54:53.124413495 +0000 UTC m=+0.058747947 container create 965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1761123044, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Nov 23 02:54:53 localhost systemd[1]: Started libpod-conmon-965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d.scope.
Nov 23 02:54:53 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7f1db98db621c4802b84f320b15cb4cfdf509b4923adf3a0c9430db7ca54f5b/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7f1db98db621c4802b84f320b15cb4cfdf509b4923adf3a0c9430db7ca54f5b/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:53 localhost podman[52568]: 2025-11-23 07:54:53.099666683 +0000 UTC m=+0.034001135 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Nov 23 02:54:53 localhost podman[52568]: 2025-11-23 07:54:53.202642904 +0000 UTC m=+0.136977376 container init 965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, vcs-type=git, name=rhosp17/openstack-ovn-controller)
Nov 23 02:54:53 localhost podman[52568]: 2025-11-23 07:54:53.209560826 +0000 UTC m=+0.143895278 container start 965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=container-puppet-ovn_controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc.)
Nov 23 02:54:53 localhost podman[52568]: 2025-11-23 07:54:53.210169583 +0000 UTC m=+0.144504045 container attach 965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39)
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 1.34 seconds
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36)
Nov 23 02:54:53 localhost puppet-user[51840]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26)
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.37 seconds
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}3fd4b82820ca431560a9101649124ba519ce5d6bf5755c5a232928b76e10eb6c'
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe'
Nov 23 02:54:53 localhost puppet-user[51751]: Warning: Empty environment setting 'TLS_PASSWORD'
Nov 23 02:54:53 localhost puppet-user[51751]:   (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182)
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}bf4205704c2ce3336692c7289c134cb4f34ad9637d3b2e0917c09fb097bf6f77'
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created
Nov 23 02:54:53 localhost systemd[1]: tmp-crun.YayJPv.mount: Deactivated successfully.
Nov 23 02:54:53 localhost systemd[1]: var-lib-containers-storage-overlay-86ebbfbb226c83e40f7926e5ff88fb92af0ff068958d5052a05d57da8af7f4e7-merged.mount: Deactivated successfully.
Nov 23 02:54:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92b190aeec187d0516e0680f42f033b74e4f5b0e8d297e9a51c25e8c3cf4369e-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51840]: Notice: Applied catalog in 0.41 seconds
Nov 23 02:54:53 localhost puppet-user[51840]: Application:
Nov 23 02:54:53 localhost puppet-user[51840]:   Initial environment: production
Nov 23 02:54:53 localhost puppet-user[51840]:   Converged environment: production
Nov 23 02:54:53 localhost puppet-user[51840]:         Run mode: user
Nov 23 02:54:53 localhost puppet-user[51840]: Changes:
Nov 23 02:54:53 localhost puppet-user[51840]:            Total: 31
Nov 23 02:54:53 localhost puppet-user[51840]: Events:
Nov 23 02:54:53 localhost puppet-user[51840]:          Success: 31
Nov 23 02:54:53 localhost puppet-user[51840]:            Total: 31
Nov 23 02:54:53 localhost puppet-user[51840]: Resources:
Nov 23 02:54:53 localhost puppet-user[51840]:          Skipped: 22
Nov 23 02:54:53 localhost puppet-user[51840]:          Changed: 31
Nov 23 02:54:53 localhost puppet-user[51840]:      Out of sync: 31
Nov 23 02:54:53 localhost puppet-user[51840]:            Total: 151
Nov 23 02:54:53 localhost puppet-user[51840]: Time:
Nov 23 02:54:53 localhost puppet-user[51840]:          Package: 0.02
Nov 23 02:54:53 localhost puppet-user[51840]:   Ceilometer config: 0.33
Nov 23 02:54:53 localhost puppet-user[51840]:   Transaction evaluation: 0.40
Nov 23 02:54:53 localhost puppet-user[51840]:   Catalog application: 0.41
Nov 23 02:54:53 localhost puppet-user[51840]:   Config retrieval: 0.44
Nov 23 02:54:53 localhost puppet-user[51840]:         Last run: 1763884493
Nov 23 02:54:53 localhost puppet-user[51840]:        Resources: 0.00
Nov 23 02:54:53 localhost puppet-user[51840]:            Total: 0.41
Nov 23 02:54:53 localhost puppet-user[51840]: Version:
Nov 23 02:54:53 localhost puppet-user[51840]:           Config: 1763884493
Nov 23 02:54:53 localhost puppet-user[51840]:           Puppet: 7.10.0
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Nov 23 02:54:53 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Nov 23 02:54:54 localhost systemd[1]: libpod-bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68.scope: Deactivated successfully.
Nov 23 02:54:54 localhost systemd[1]: libpod-bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68.scope: Consumed 2.994s CPU time.
Nov 23 02:54:54 localhost podman[51798]: 2025-11-23 07:54:54.389144682 +0000 UTC m=+3.515541798 container died bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, com.redhat.component=openstack-ceilometer-central-container, build-date=2025-11-19T00:11:59Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, container_name=container-puppet-ceilometer, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Nov 23 02:54:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:54 localhost systemd[1]: var-lib-containers-storage-overlay-f1efd3f89201e2ae0ff08eee424918b729c04d8c64dfb0844aa7373839c58b35-merged.mount: Deactivated successfully.
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Nov 23 02:54:54 localhost podman[52712]: 2025-11-23 07:54:54.499109567 +0000 UTC m=+0.096004489 container cleanup bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:59Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-central-container, container_name=container-puppet-ceilometer, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com)
Nov 23 02:54:54 localhost systemd[1]: libpod-conmon-bbb39529ad0d0ea854daeef212818f88c7156a136033b283b9cb6638134b5a68.scope: Deactivated successfully.
Nov 23 02:54:54 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Nov 23 02:54:54 localhost puppet-user[52488]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:54 localhost puppet-user[52488]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:54 localhost puppet-user[52488]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:54 localhost puppet-user[52488]:   (file & line not available)
Nov 23 02:54:54 localhost puppet-user[52488]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:54 localhost puppet-user[52488]:   (file & line not available)
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Nov 23 02:54:54 localhost puppet-user[52488]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.22 seconds
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Nov 23 02:54:54 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Nov 23 02:54:55 localhost puppet-user[52488]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Nov 23 02:54:55 localhost puppet-user[52645]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:55 localhost puppet-user[52645]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:55 localhost puppet-user[52645]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:55 localhost puppet-user[52645]:   (file & line not available)
Nov 23 02:54:55 localhost puppet-user[52488]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Nov 23 02:54:55 localhost puppet-user[52488]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}34923836f6bdacca7deca81fb8e88fba8264a7321573338ad2a8f8daab698a4a'
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Nov 23 02:54:55 localhost puppet-user[52488]: Notice: Applied catalog in 0.10 seconds
Nov 23 02:54:55 localhost puppet-user[52488]: Application:
Nov 23 02:54:55 localhost puppet-user[52488]:   Initial environment: production
Nov 23 02:54:55 localhost puppet-user[52488]:   Converged environment: production
Nov 23 02:54:55 localhost puppet-user[52488]:         Run mode: user
Nov 23 02:54:55 localhost puppet-user[52488]: Changes:
Nov 23 02:54:55 localhost puppet-user[52488]:            Total: 3
Nov 23 02:54:55 localhost puppet-user[52488]: Events:
Nov 23 02:54:55 localhost puppet-user[52488]:          Success: 3
Nov 23 02:54:55 localhost puppet-user[52488]:            Total: 3
Nov 23 02:54:55 localhost puppet-user[52488]: Resources:
Nov 23 02:54:55 localhost puppet-user[52488]:          Skipped: 11
Nov 23 02:54:55 localhost puppet-user[52488]:          Changed: 3
Nov 23 02:54:55 localhost puppet-user[52488]:      Out of sync: 3
Nov 23 02:54:55 localhost puppet-user[52488]:            Total: 25
Nov 23 02:54:55 localhost puppet-user[52488]: Time:
Nov 23 02:54:55 localhost puppet-user[52488]:      Concat file: 0.00
Nov 23 02:54:55 localhost puppet-user[52488]:   Concat fragment: 0.00
Nov 23 02:54:55 localhost puppet-user[52488]:             File: 0.01
Nov 23 02:54:55 localhost puppet-user[52488]:   Transaction evaluation: 0.10
Nov 23 02:54:55 localhost puppet-user[52488]:   Catalog application: 0.10
Nov 23 02:54:55 localhost puppet-user[52488]:   Config retrieval: 0.27
Nov 23 02:54:55 localhost puppet-user[52488]:         Last run: 1763884495
Nov 23 02:54:55 localhost puppet-user[52488]:            Total: 0.10
Nov 23 02:54:55 localhost puppet-user[52488]: Version:
Nov 23 02:54:55 localhost puppet-user[52488]:           Config: 1763884494
Nov 23 02:54:55 localhost puppet-user[52488]:           Puppet: 7.10.0
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Nov 23 02:54:55 localhost puppet-user[52645]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:55 localhost puppet-user[52645]:   (file & line not available)
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.27 seconds
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Nov 23 02:54:55 localhost systemd[1]: libpod-f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5.scope: Deactivated successfully.
Nov 23 02:54:55 localhost systemd[1]: libpod-f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5.scope: Consumed 2.362s CPU time.
Nov 23 02:54:55 localhost ovs-vsctl[52917]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Nov 23 02:54:55 localhost podman[52341]: 2025-11-23 07:54:55.392252504 +0000 UTC m=+2.721096950 container died f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, container_name=container-puppet-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_puppet_step1, distribution-scope=public, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1)
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52931]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}7457979272b158ac88adf13552cc58cb87586b19a7b8e2158301712e847fdf72'
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52933]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.108
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Nov 23 02:54:55 localhost systemd[1]: tmp-crun.y33iMh.mount: Deactivated successfully.
Nov 23 02:54:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:55 localhost systemd[1]: var-lib-containers-storage-overlay-e5c7be86f522835b38ebdead166e5392bec044956b756fc70136b6abae4f549a-merged.mount: Deactivated successfully.
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52942]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005532586.localdomain
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005532586.novalocal' to 'np0005532586.localdomain'
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52944]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Nov 23 02:54:55 localhost podman[52924]: 2025-11-23 07:54:55.544159342 +0000 UTC m=+0.138216039 container cleanup f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, container_name=container-puppet-rsyslog, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com)
Nov 23 02:54:55 localhost ovs-vsctl[52946]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52949]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52962]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52964]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52966]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52968]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:1a:d5:cf
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52970]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52972]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Nov 23 02:54:55 localhost ovs-vsctl[52974]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Nov 23 02:54:55 localhost puppet-user[52645]: Notice: Applied catalog in 0.42 seconds
Nov 23 02:54:55 localhost puppet-user[52645]: Application:
Nov 23 02:54:55 localhost puppet-user[52645]:   Initial environment: production
Nov 23 02:54:55 localhost puppet-user[52645]:   Converged environment: production
Nov 23 02:54:55 localhost puppet-user[52645]:         Run mode: user
Nov 23 02:54:55 localhost puppet-user[52645]: Changes:
Nov 23 02:54:55 localhost puppet-user[52645]:            Total: 14
Nov 23 02:54:55 localhost puppet-user[52645]: Events:
Nov 23 02:54:55 localhost puppet-user[52645]:          Success: 14
Nov 23 02:54:55 localhost puppet-user[52645]:            Total: 14
Nov 23 02:54:55 localhost puppet-user[52645]: Resources:
Nov 23 02:54:55 localhost puppet-user[52645]:          Skipped: 12
Nov 23 02:54:55 localhost puppet-user[52645]:          Changed: 14
Nov 23 02:54:55 localhost puppet-user[52645]:      Out of sync: 14
Nov 23 02:54:55 localhost puppet-user[52645]:            Total: 29
Nov 23 02:54:55 localhost puppet-user[52645]: Time:
Nov 23 02:54:55 localhost puppet-user[52645]:             Exec: 0.01
Nov 23 02:54:55 localhost puppet-user[52645]:   Config retrieval: 0.30
Nov 23 02:54:55 localhost puppet-user[52645]:        Vs config: 0.36
Nov 23 02:54:55 localhost puppet-user[52645]:   Transaction evaluation: 0.41
Nov 23 02:54:55 localhost puppet-user[52645]:   Catalog application: 0.42
Nov 23 02:54:55 localhost puppet-user[52645]:         Last run: 1763884495
Nov 23 02:54:55 localhost puppet-user[52645]:            Total: 0.43
Nov 23 02:54:55 localhost puppet-user[52645]: Version:
Nov 23 02:54:55 localhost puppet-user[52645]:           Config: 1763884495
Nov 23 02:54:55 localhost puppet-user[52645]:           Puppet: 7.10.0
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Nov 23 02:54:55 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Nov 23 02:54:56 localhost systemd[1]: libpod-conmon-f55c841c087425c5bea20795c3f67e1241f2161e524e28e240d4097e15bceec5.scope: Deactivated successfully.
Nov 23 02:54:56 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Nov 23 02:54:56 localhost podman[52651]: 2025-11-23 07:54:53.320062355 +0000 UTC m=+0.035297701 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Nov 23 02:54:56 localhost systemd[1]: libpod-965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d.scope: Deactivated successfully.
Nov 23 02:54:56 localhost systemd[1]: libpod-965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d.scope: Consumed 2.842s CPU time.
Nov 23 02:54:56 localhost podman[52568]: 2025-11-23 07:54:56.212352558 +0000 UTC m=+3.146687030 container died 965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller)
Nov 23 02:54:56 localhost systemd[1]: var-lib-containers-storage-overlay-e7f1db98db621c4802b84f320b15cb4cfdf509b4923adf3a0c9430db7ca54f5b-merged.mount: Deactivated successfully.
Nov 23 02:54:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:56 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Nov 23 02:54:56 localhost podman[53044]: 2025-11-23 07:54:56.643683341 +0000 UTC m=+0.420251962 container cleanup 965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-ovn-controller, vcs-type=git, container_name=container-puppet-ovn_controller, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 02:54:56 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Nov 23 02:54:56 localhost systemd[1]: libpod-conmon-965b1b1523d9a5a296c2917bf0404137aef626b15874a3686fcacdadd7c23d4d.scope: Deactivated successfully.
Nov 23 02:54:56 localhost podman[53071]: 2025-11-23 07:54:56.696280035 +0000 UTC m=+0.430370778 container create 9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:23:27Z, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, container_name=container-puppet-neutron, release=1761123044)
Nov 23 02:54:56 localhost podman[53071]: 2025-11-23 07:54:56.300213501 +0000 UTC m=+0.034304234 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Nov 23 02:54:56 localhost systemd[1]: Started libpod-conmon-9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f.scope.
Nov 23 02:54:56 localhost systemd[1]: Started libcrun container.
Nov 23 02:54:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e923447bbcace1356863835c0089a8ad58eff9a2f791c2262e7c0fcdcbc23235/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 23 02:54:56 localhost podman[53071]: 2025-11-23 07:54:56.769238726 +0000 UTC m=+0.503329469 container init 9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-server-container, build-date=2025-11-19T00:23:27Z, description=Red Hat OpenStack Platform 17.1 neutron-server, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=container-puppet-neutron, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-server)
Nov 23 02:54:56 localhost podman[53071]: 2025-11-23 07:54:56.777033471 +0000 UTC m=+0.511124214 container start 9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, io.openshift.expose-services=, container_name=container-puppet-neutron, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-server-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-server, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:23:27Z)
Nov 23 02:54:56 localhost podman[53071]: 2025-11-23 07:54:56.777293568 +0000 UTC m=+0.511384341 container attach 9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, container_name=container-puppet-neutron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-11-19T00:23:27Z)
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Nov 23 02:54:57 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}66a7ab6cc1a19ea5002a5aaa2cfb2f196778c89c859d0afac926fe3fac9c75a4'
Nov 23 02:54:58 localhost puppet-user[51751]: Notice: Applied catalog in 4.76 seconds
Nov 23 02:54:58 localhost puppet-user[51751]: Application:
Nov 23 02:54:58 localhost puppet-user[51751]:   Initial environment: production
Nov 23 02:54:58 localhost puppet-user[51751]:   Converged environment: production
Nov 23 02:54:58 localhost puppet-user[51751]:         Run mode: user
Nov 23 02:54:58 localhost puppet-user[51751]: Changes:
Nov 23 02:54:58 localhost puppet-user[51751]:            Total: 183
Nov 23 02:54:58 localhost puppet-user[51751]: Events:
Nov 23 02:54:58 localhost puppet-user[51751]:          Success: 183
Nov 23 02:54:58 localhost puppet-user[51751]:            Total: 183
Nov 23 02:54:58 localhost puppet-user[51751]: Resources:
Nov 23 02:54:58 localhost puppet-user[51751]:          Changed: 183
Nov 23 02:54:58 localhost puppet-user[51751]:      Out of sync: 183
Nov 23 02:54:58 localhost puppet-user[51751]:          Skipped: 57
Nov 23 02:54:58 localhost puppet-user[51751]:            Total: 487
Nov 23 02:54:58 localhost puppet-user[51751]: Time:
Nov 23 02:54:58 localhost puppet-user[51751]:      Concat file: 0.00
Nov 23 02:54:58 localhost puppet-user[51751]:   Concat fragment: 0.00
Nov 23 02:54:58 localhost puppet-user[51751]:           Anchor: 0.00
Nov 23 02:54:58 localhost puppet-user[51751]:        File line: 0.00
Nov 23 02:54:58 localhost puppet-user[51751]:   Virtlogd config: 0.00
Nov 23 02:54:58 localhost puppet-user[51751]:   Virtstoraged config: 0.01
Nov 23 02:54:58 localhost puppet-user[51751]:   Virtnodedevd config: 0.02
Nov 23 02:54:58 localhost puppet-user[51751]:             Exec: 0.02
Nov 23 02:54:58 localhost puppet-user[51751]:   Virtqemud config: 0.03
Nov 23 02:54:58 localhost puppet-user[51751]:          Package: 0.03
Nov 23 02:54:58 localhost puppet-user[51751]:             File: 0.03
Nov 23 02:54:58 localhost puppet-user[51751]:   Virtproxyd config: 0.03
Nov 23 02:54:58 localhost puppet-user[51751]:   Virtsecretd config: 0.03
Nov 23 02:54:58 localhost puppet-user[51751]:           Augeas: 1.13
Nov 23 02:54:58 localhost puppet-user[51751]:   Config retrieval: 1.59
Nov 23 02:54:58 localhost puppet-user[51751]:         Last run: 1763884498
Nov 23 02:54:58 localhost puppet-user[51751]:      Nova config: 3.20
Nov 23 02:54:58 localhost puppet-user[51751]:   Transaction evaluation: 4.74
Nov 23 02:54:58 localhost puppet-user[51751]:   Catalog application: 4.76
Nov 23 02:54:58 localhost puppet-user[51751]:        Resources: 0.00
Nov 23 02:54:58 localhost puppet-user[51751]:            Total: 4.76
Nov 23 02:54:58 localhost puppet-user[51751]: Version:
Nov 23 02:54:58 localhost puppet-user[51751]:           Config: 1763884491
Nov 23 02:54:58 localhost puppet-user[51751]:           Puppet: 7.10.0
Nov 23 02:54:58 localhost puppet-user[53126]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Nov 23 02:54:58 localhost puppet-user[53126]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 02:54:58 localhost puppet-user[53126]:   (file: /etc/puppet/hiera.yaml)
Nov 23 02:54:58 localhost puppet-user[53126]: Warning: Undefined variable '::deploy_config_name';
Nov 23 02:54:58 localhost puppet-user[53126]:   (file & line not available)
Nov 23 02:54:58 localhost puppet-user[53126]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 02:54:58 localhost puppet-user[53126]:   (file & line not available)
Nov 23 02:54:58 localhost puppet-user[53126]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37)
Nov 23 02:54:59 localhost systemd[1]: libpod-3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0.scope: Deactivated successfully.
Nov 23 02:54:59 localhost systemd[1]: libpod-3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0.scope: Consumed 8.876s CPU time.
Nov 23 02:54:59 localhost podman[53238]: 2025-11-23 07:54:59.272484821 +0000 UTC m=+0.036779829 container died 3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true)
Nov 23 02:54:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0-userdata-shm.mount: Deactivated successfully.
Nov 23 02:54:59 localhost systemd[1]: var-lib-containers-storage-overlay-870d975636503a07ec195e49b00132bfc6eee6e29391d2ce8497d2068e2c55c9-merged.mount: Deactivated successfully.
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.66 seconds
Nov 23 02:54:59 localhost podman[53238]: 2025-11-23 07:54:59.392976602 +0000 UTC m=+0.157271580 container cleanup 3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, config_id=tripleo_puppet_step1, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=container-puppet-nova_libvirt, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Nov 23 02:54:59 localhost systemd[1]: libpod-conmon-3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0.scope: Deactivated successfully.
Nov 23 02:54:59 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt 
profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw 
--volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Nov 23 02:54:59 localhost puppet-user[53126]: Notice: Applied catalog in 0.44 seconds
Nov 23 02:54:59 localhost puppet-user[53126]: Application:
Nov 23 02:54:59 localhost puppet-user[53126]:   Initial environment: production
Nov 23 02:54:59 localhost puppet-user[53126]:   Converged environment: production
Nov 23 02:54:59 localhost puppet-user[53126]:         Run mode: user
Nov 23 02:54:59 localhost puppet-user[53126]: Changes:
Nov 23 02:54:59 localhost puppet-user[53126]:            Total: 33
Nov 23 02:54:59 localhost puppet-user[53126]: Events:
Nov 23 02:54:59 localhost puppet-user[53126]:          Success: 33
Nov 23 02:54:59 localhost puppet-user[53126]:            Total: 33
Nov 23 02:54:59 localhost puppet-user[53126]: Resources:
Nov 23 02:54:59 localhost puppet-user[53126]:          Skipped: 21
Nov 23 02:54:59 localhost puppet-user[53126]:          Changed: 33
Nov 23 02:54:59 localhost puppet-user[53126]:      Out of sync: 33
Nov 23 02:54:59 localhost puppet-user[53126]:            Total: 155
Nov 23 02:54:59 localhost puppet-user[53126]: Time:
Nov 23 02:54:59 localhost puppet-user[53126]:        Resources: 0.00
Nov 23 02:54:59 localhost puppet-user[53126]:   Ovn metadata agent config: 0.02
Nov 23 02:54:59 localhost puppet-user[53126]:   Neutron config: 0.37
Nov 23 02:54:59 localhost puppet-user[53126]:   Transaction evaluation: 0.44
Nov 23 02:54:59 localhost puppet-user[53126]:   Catalog application: 0.44
Nov 23 02:54:59 localhost puppet-user[53126]:   Config retrieval: 0.74
Nov 23 02:54:59 localhost puppet-user[53126]:         Last run: 1763884499
Nov 23 02:54:59 localhost puppet-user[53126]:            Total: 0.44
Nov 23 02:54:59 localhost puppet-user[53126]: Version:
Nov 23 02:54:59 localhost puppet-user[53126]:           Config: 1763884498
Nov 23 02:54:59 localhost puppet-user[53126]:           Puppet: 7.10.0
Nov 23 02:55:00 localhost systemd[1]: libpod-9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f.scope: Deactivated successfully.
Nov 23 02:55:00 localhost systemd[1]: libpod-9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f.scope: Consumed 3.629s CPU time.
Nov 23 02:55:00 localhost podman[53071]: 2025-11-23 07:55:00.405118851 +0000 UTC m=+4.139209574 container died 9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:23:27Z, description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-neutron-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, container_name=container-puppet-neutron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-server, distribution-scope=public, release=1761123044, config_id=tripleo_puppet_step1, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 02:55:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f-userdata-shm.mount: Deactivated successfully.
Nov 23 02:55:00 localhost systemd[1]: var-lib-containers-storage-overlay-e923447bbcace1356863835c0089a8ad58eff9a2f791c2262e7c0fcdcbc23235-merged.mount: Deactivated successfully.
Nov 23 02:55:00 localhost podman[53310]: 2025-11-23 07:55:00.552749587 +0000 UTC m=+0.137987264 container cleanup 9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, name=rhosp17/openstack-neutron-server, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., tcib_managed=true, container_name=container-puppet-neutron, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-11-19T00:23:27Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, com.redhat.component=openstack-neutron-server-container)
Nov 23 02:55:00 localhost systemd[1]: libpod-conmon-9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f.scope: Deactivated successfully.
Nov 23 02:55:00 localhost python3[51339]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005532586 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005532586', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Nov 23 02:55:01 localhost python3[53362]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:02 localhost python3[53394]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:55:03 localhost python3[53444]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:55:03 localhost python3[53487]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884502.8177655-84579-229998868267857/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:04 localhost python3[53549]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:55:04 localhost python3[53592]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884503.717182-84579-224617197711310/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:04 localhost python3[53654]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:55:05 localhost python3[53697]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884504.6653974-84718-14173032369840/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:05 localhost python3[53759]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:55:06 localhost python3[53802]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884505.5949585-84755-269390658253841/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:06 localhost python3[53832]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:55:06 localhost systemd[1]: Reloading.
Nov 23 02:55:06 localhost systemd-sysv-generator[53855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:55:06 localhost systemd-rc-local-generator[53849]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:55:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:55:07 localhost systemd[1]: Reloading.
Nov 23 02:55:07 localhost systemd-rc-local-generator[53894]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:55:07 localhost systemd-sysv-generator[53898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:55:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:55:07 localhost systemd[1]: Starting TripleO Container Shutdown...
Nov 23 02:55:07 localhost systemd[1]: Finished TripleO Container Shutdown.
Nov 23 02:55:07 localhost python3[53955]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:55:08 localhost python3[53998]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884507.4963622-84797-221744223535527/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:08 localhost python3[54060]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 02:55:08 localhost python3[54103]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884508.362745-84820-197382264730414/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:09 localhost python3[54133]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:55:09 localhost systemd[1]: Reloading.
Nov 23 02:55:09 localhost systemd-rc-local-generator[54156]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:55:09 localhost systemd-sysv-generator[54161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:55:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:55:09 localhost systemd[1]: Reloading.
Nov 23 02:55:09 localhost systemd-rc-local-generator[54193]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:55:09 localhost systemd-sysv-generator[54198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:55:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:55:10 localhost systemd[1]: Starting Create netns directory...
Nov 23 02:55:10 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 02:55:10 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 02:55:10 localhost systemd[1]: Finished Create netns directory.
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 3ca07ad6f1308e0f483a5c84bda3f5ec
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: d31718fcd17fdeee6489534105191c7a
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 83ab5b37680071f0941108e43c518cc1
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 3ea58633c99f05090f3faea662c628ca
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: da5facbcd2df03440dc3d35420cadd63
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: da5facbcd2df03440dc3d35420cadd63
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 8ff67c95922a0236a1e9ce0694abb49c
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:10 localhost python3[54226]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 54a97af4633bfad00758ecf55e783ce2
Nov 23 02:55:12 localhost python3[54285]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Nov 23 02:55:13 localhost podman[54324]: 2025-11-23 07:55:13.602027557 +0000 UTC m=+0.078561979 container create 2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step1, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Nov 23 02:55:13 localhost systemd[1]: Started libpod-conmon-2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370.scope.
Nov 23 02:55:13 localhost systemd[1]: Started libcrun container.
Nov 23 02:55:13 localhost podman[54324]: 2025-11-23 07:55:13.558638544 +0000 UTC m=+0.035172986 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Nov 23 02:55:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/572e34444311607b9314c97442135b544356d8a95d71aa7adf26ce39fbf50aaa/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Nov 23 02:55:13 localhost podman[54324]: 2025-11-23 07:55:13.668812205 +0000 UTC m=+0.145346627 container init 2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T22:49:46Z, release=1761123044, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:55:13 localhost podman[54324]: 2025-11-23 07:55:13.679508957 +0000 UTC m=+0.156043369 container start 2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=metrics_qdr_init_logs, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, release=1761123044, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:55:13 localhost podman[54324]: 2025-11-23 07:55:13.680065931 +0000 UTC m=+0.156600393 container attach 2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=metrics_qdr_init_logs, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 02:55:13 localhost systemd[1]: libpod-2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370.scope: Deactivated successfully.
Nov 23 02:55:13 localhost podman[54324]: 2025-11-23 07:55:13.68728177 +0000 UTC m=+0.163816212 container died 2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr_init_logs, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64)
Nov 23 02:55:13 localhost podman[54343]: 2025-11-23 07:55:13.777090944 +0000 UTC m=+0.076109484 container cleanup 2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, container_name=metrics_qdr_init_logs, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:55:13 localhost systemd[1]: libpod-conmon-2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370.scope: Deactivated successfully.
Nov 23 02:55:13 localhost python3[54285]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd
Nov 23 02:55:14 localhost podman[54416]: 2025-11-23 07:55:14.205340195 +0000 UTC m=+0.090542343 container create c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:55:14 localhost systemd[1]: Started libpod-conmon-c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.scope.
Nov 23 02:55:14 localhost systemd[1]: Started libcrun container.
Nov 23 02:55:14 localhost podman[54416]: 2025-11-23 07:55:14.160805953 +0000 UTC m=+0.046008151 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Nov 23 02:55:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4a8a7d9871b00def826b947fe67563fec7276b8de017c820e96afd9bc15049/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Nov 23 02:55:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e4a8a7d9871b00def826b947fe67563fec7276b8de017c820e96afd9bc15049/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff)
Nov 23 02:55:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:55:14 localhost podman[54416]: 2025-11-23 07:55:14.298313312 +0000 UTC m=+0.183515560 container init c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, vendor=Red Hat, Inc.)
Nov 23 02:55:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:55:14 localhost podman[54416]: 2025-11-23 07:55:14.329454613 +0000 UTC m=+0.214656761 container start c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.)
Nov 23 02:55:14 localhost python3[54285]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=3ca07ad6f1308e0f483a5c84bda3f5ec --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Nov 23 02:55:14 localhost podman[54438]: 2025-11-23 07:55:14.481188446 +0000 UTC m=+0.143394936 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, 
container_name=metrics_qdr, release=1761123044, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Nov 23 02:55:14 localhost systemd[1]: var-lib-containers-storage-overlay-572e34444311607b9314c97442135b544356d8a95d71aa7adf26ce39fbf50aaa-merged.mount: Deactivated successfully.
Nov 23 02:55:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ab01c9175c376d55302d62c0a8afd71687cd1fd31336fd479b5a5dc50595370-userdata-shm.mount: Deactivated successfully.
Nov 23 02:55:14 localhost podman[54438]: 2025-11-23 07:55:14.696891093 +0000 UTC m=+0.359097563 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Nov 23 02:55:14 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:55:14 localhost python3[54511]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:15 localhost python3[54527]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 02:55:15 localhost python3[54588]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884515.2222147-85042-64931772775503/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:16 localhost python3[54604]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 02:55:16 localhost systemd[1]: Reloading.
Nov 23 02:55:16 localhost systemd-rc-local-generator[54629]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:55:16 localhost systemd-sysv-generator[54632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:55:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:55:17 localhost python3[54656]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 02:55:18 localhost systemd[1]: Reloading.
Nov 23 02:55:18 localhost systemd-rc-local-generator[54679]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 02:55:18 localhost systemd-sysv-generator[54682]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 02:55:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 02:55:18 localhost systemd[1]: Starting metrics_qdr container...
Nov 23 02:55:18 localhost systemd[1]: Started metrics_qdr container.
Nov 23 02:55:19 localhost python3[54736]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:20 localhost python3[54857]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005532586 step=1 update_config_hash_only=False
Nov 23 02:55:20 localhost python3[54873]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 02:55:21 localhost python3[54889]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Nov 23 02:55:33 localhost sshd[54890]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:55:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:55:45 localhost podman[54892]: 2025-11-23 07:55:45.205907412 +0000 UTC m=+0.104138251 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, 
managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:55:45 localhost podman[54892]: 2025-11-23 07:55:45.410997961 +0000 UTC m=+0.309228780 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 02:55:45 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:56:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:56:16 localhost systemd[1]: tmp-crun.a6nJae.mount: Deactivated successfully.
Nov 23 02:56:16 localhost podman[54999]: 2025-11-23 07:56:16.18003655 +0000 UTC m=+0.080588033 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044)
Nov 23 02:56:16 localhost podman[54999]: 2025-11-23 07:56:16.404168529 +0000 UTC m=+0.304720022 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, 
config_id=tripleo_step1, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:56:16 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:56:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:56:47 localhost podman[55027]: 2025-11-23 07:56:47.189140614 +0000 UTC m=+0.091101993 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1)
Nov 23 02:56:47 localhost podman[55027]: 2025-11-23 07:56:47.354057021 +0000 UTC m=+0.256018400 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Nov 23 02:56:47 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:57:15 localhost sshd[55134]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:57:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:57:18 localhost podman[55136]: 2025-11-23 07:57:18.14301039 +0000 UTC m=+0.078267590 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Nov 23 02:57:18 localhost podman[55136]: 2025-11-23 07:57:18.297369241 +0000 UTC m=+0.232626391 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true)
Nov 23 02:57:18 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:57:26 localhost sshd[55167]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:57:26 localhost sshd[55168]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:57:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:57:49 localhost podman[55169]: 2025-11-23 07:57:49.178068311 +0000 UTC m=+0.080435926 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 02:57:49 localhost podman[55169]: 2025-11-23 07:57:49.362377022 +0000 UTC m=+0.264744657 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 02:57:49 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:58:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:58:20 localhost podman[55276]: 2025-11-23 07:58:20.179962705 +0000 UTC m=+0.084395400 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z)
Nov 23 02:58:20 localhost podman[55276]: 2025-11-23 07:58:20.375984381 +0000 UTC m=+0.280417066 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Nov 23 02:58:20 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:58:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:58:51 localhost podman[55306]: 2025-11-23 07:58:51.175502476 +0000 UTC m=+0.083743953 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Nov 23 02:58:51 localhost podman[55306]: 2025-11-23 07:58:51.395060206 +0000 UTC m=+0.303301633 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, release=1761123044, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team)
Nov 23 02:58:51 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:59:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:59:22 localhost systemd[1]: tmp-crun.BrfGCr.mount: Deactivated successfully.
Nov 23 02:59:22 localhost podman[55412]: 2025-11-23 07:59:22.183296816 +0000 UTC m=+0.090140889 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, build-date=2025-11-18T22:49:46Z, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 02:59:22 localhost podman[55412]: 2025-11-23 07:59:22.363998984 +0000 UTC m=+0.270843057 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=metrics_qdr, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, release=1761123044, io.buildah.version=1.41.4)
Nov 23 02:59:22 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 02:59:24 localhost sshd[55441]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 02:59:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 02:59:53 localhost systemd[1]: tmp-crun.n7O68Z.mount: Deactivated successfully.
Nov 23 02:59:53 localhost podman[55443]: 2025-11-23 07:59:53.161627778 +0000 UTC m=+0.073495596 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4)
Nov 23 02:59:53 localhost podman[55443]: 2025-11-23 07:59:53.354828541 +0000 UTC m=+0.266696339 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr)
Nov 23 02:59:53 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:00:01 localhost ceph-osd[31668]: osd.1 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2,1,3] r=1 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:02 localhost ceph-osd[31668]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1,2,0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:04 localhost ceph-osd[31668]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1,2,0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:05 localhost ceph-osd[31668]: osd.1 pg_epoch: 23 pg[4.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [3,5,1] r=2 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:06 localhost ceph-osd[32615]: osd.4 pg_epoch: 25 pg[5.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [4,3,2] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:07 localhost ceph-osd[32615]: osd.4 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [4,3,2] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:23 localhost ceph-osd[31668]: osd.1 pg_epoch: 31 pg[6.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [0,5,1] r=2 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:00:24 localhost podman[55549]: 2025-11-23 08:00:24.187144348 +0000 UTC m=+0.091565136 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
container_name=metrics_qdr, url=https://www.redhat.com, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z)
Nov 23 03:00:24 localhost podman[55549]: 2025-11-23 08:00:24.399639754 +0000 UTC m=+0.304060522 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, vcs-type=git, release=1761123044, tcib_managed=true)
Nov 23 03:00:24 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:00:25 localhost ceph-osd[31668]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [5,1,3] r=1 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:33 localhost ceph-osd[31668]: osd.1 pg_epoch: 38 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=38 pruub=8.037537575s) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 active pruub 1119.776123047s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,3], acting [2,1,3] -> [2,1,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:33 localhost ceph-osd[31668]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38 pruub=10.729206085s) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active pruub 1122.468017578s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,0], acting [1,2,0] -> [1,2,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:33 localhost ceph-osd[31668]: osd.1 pg_epoch: 38 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=38 pruub=8.034580231s) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1119.776123047s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:33 localhost ceph-osd[31668]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38 pruub=10.729206085s) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.468017578s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.1f( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.1e( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.1d( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.1b( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.1c( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.19( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.1a( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.8( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.7( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.3( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.2( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.4( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.5( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.6( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.1( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.9( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.a( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.c( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.d( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.e( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.f( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.b( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.10( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.11( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.12( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.13( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.15( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.16( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.17( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.18( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=21/22 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[2.14( empty local-lis/les=19/20 n=0 ec=38/19 lis/c=19/19 les/c/f=20/20/0 sis=38) [2,1,3] r=1 lpr=38 pi=[19,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=38/39 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:34 localhost ceph-osd[31668]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=21/21 les/c/f=22/22/0 sis=38) [1,2,0] r=0 lpr=38 pi=[21,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:35 localhost ceph-osd[31668]: osd.1 pg_epoch: 40 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40 pruub=10.230142593s) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 active pruub 1123.981201172s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,1], acting [3,5,1] -> [3,5,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:35 localhost ceph-osd[31668]: osd.1 pg_epoch: 40 pg[4.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=40 pruub=10.225845337s) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.981201172s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:35 localhost ceph-osd[32615]: osd.4 pg_epoch: 40 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=12.411870003s) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active pruub 1121.778198242s@ mbc={}] start_peering_interval up [4,3,2] -> [4,3,2], acting [4,3,2] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:35 localhost ceph-osd[32615]: osd.4 pg_epoch: 40 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40 pruub=12.411870003s) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown pruub 1121.778198242s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.17( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.16( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.12( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.13( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.15( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.14( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.10( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.11( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.4( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.8( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.9( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1d( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1c( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.7( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1a( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.6( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.5( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.18( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.2( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.3( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1b( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.f( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.e( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.19( empty local-lis/les=25/26 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.19( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.1b( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.1d( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.1c( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.1f( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.e( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.1( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.3( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.6( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.c( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.a( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.b( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.17( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.1a( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.15( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.13( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.11( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.1e( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.18( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[31668]: osd.1 pg_epoch: 41 pg[4.16( empty local-lis/les=23/24 n=0 ec=40/23 lis/c=23/23 les/c/f=24/24/0 sis=40) [3,5,1] r=2 lpr=40 pi=[23,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.0( empty local-lis/les=40/41 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:36 localhost ceph-osd[32615]: osd.4 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=25/25 les/c/f=26/26/0 sis=40) [4,3,2] r=0 lpr=40 pi=[25,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:37 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Nov 23 03:00:37 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Nov 23 03:00:37 localhost ceph-osd[31668]: osd.1 pg_epoch: 42 pg[7.0( v 35'39 (0'0,35'39] local-lis/les=33/34 n=22 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=42 pruub=12.367185593s) [5,1,3] r=1 lpr=42 pi=[33,42)/1 luod=0'0 lua=35'37 crt=35'39 lcod 35'38 mlcod 0'0 active pruub 1128.206909180s@ mbc={}] start_peering_interval up [5,1,3] -> [5,1,3], acting [5,1,3] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:37 localhost ceph-osd[31668]: osd.1 pg_epoch: 42 pg[7.0( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=42 pruub=12.363790512s) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 lcod 35'38 mlcod 0'0 unknown NOTIFY pruub 1128.206909180s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:37 localhost ceph-osd[31668]: osd.1 pg_epoch: 42 pg[6.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=42 pruub=10.336079597s) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 active pruub 1126.181640625s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,1], acting [0,5,1] -> [0,5,1], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:37 localhost ceph-osd[31668]: osd.1 pg_epoch: 42 pg[6.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=42 pruub=10.332961082s) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.181640625s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:37 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Nov 23 03:00:37 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Nov 23 03:00:38 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 23 03:00:38 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.13( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.11( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.10( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.12( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.17( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.16( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.15( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.14( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.b( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.9( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.8( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.f( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.d( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.5( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.4( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.1( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.3( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.1c( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.c( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.7( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.1d( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.1e( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.1f( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.19( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.18( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.1b( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[6.1a( empty local-lis/les=31/32 n=0 ec=42/31 lis/c=31/31 les/c/f=32/32/0 sis=42) [0,5,1] r=2 lpr=42 pi=[31,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.c( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.e( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.d( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.f( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.8( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.2( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.7( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.1( v 35'39 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.a( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.5( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.9( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=1 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:38 localhost ceph-osd[31668]: osd.1 pg_epoch: 43 pg[7.4( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=33/34 n=2 ec=42/33 lis/c=33/33 les/c/f=34/34/0 sis=42) [5,1,3] r=1 lpr=42 pi=[33,42)/1 crt=35'39 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:39 localhost sshd[55623]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:00:40 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.11 deep-scrub starts
Nov 23 03:00:40 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.11 deep-scrub ok
Nov 23 03:00:41 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Nov 23 03:00:41 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Nov 23 03:00:42 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Nov 23 03:00:42 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.11( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.12( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [4,2,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.1a( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,3,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.1b( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.1e( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [4,5,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.852419853s) [0,2,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391845703s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,4], acting [4,3,2] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.852332115s) [0,2,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391845703s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.839920044s) [2,4,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.379516602s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,3], acting [4,3,2] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851928711s) [5,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391723633s@ mbc={}] start_peering_interval up [4,3,2] -> [5,4,0], acting [4,3,2] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850914955s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.390869141s@ mbc={}] start_peering_interval up [4,3,2] -> [0,1,2], acting [4,3,2] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.839604378s) [2,4,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.379516602s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851713181s) [5,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391723633s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850711823s) [0,1,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.390869141s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851778984s) [2,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.392089844s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851438522s) [5,3,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391723633s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,4], acting [4,3,2] -> [5,3,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851696968s) [3,2,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391967773s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851364136s) [5,3,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391723633s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851524353s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391967773s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851591110s) [3,5,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.392089844s@ mbc={}] start_peering_interval up [4,3,2] -> [3,5,4], acting [4,3,2] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850620270s) [1,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391235352s@ mbc={}] start_peering_interval up [4,3,2] -> [1,0,5], acting [4,3,2] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851669312s) [2,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.392089844s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851150513s) [0,2,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.392456055s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,1], acting [4,3,2] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850731850s) [3,2,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391967773s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.851079941s) [0,2,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.392456055s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850619316s) [3,5,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.392089844s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850570679s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391967773s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849701881s) [1,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391235352s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850021362s) [2,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391845703s@ mbc={}] start_peering_interval up [4,3,2] -> [2,3,1], acting [4,3,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850183487s) [5,0,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391967773s@ mbc={}] start_peering_interval up [4,3,2] -> [5,0,1], acting [4,3,2] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849914551s) [5,0,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391967773s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849044800s) [4,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391357422s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849982262s) [2,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.392089844s@ mbc={}] start_peering_interval up [4,3,2] -> [2,1,3], acting [4,3,2] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849044800s) [4,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.391357422s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849225044s) [1,3,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391723633s@ mbc={}] start_peering_interval up [4,3,2] -> [1,3,5], acting [4,3,2] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849279404s) [2,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391845703s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849047661s) [1,3,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391723633s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849690437s) [2,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.392089844s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848052979s) [3,2,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391113281s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849156380s) [2,0,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.392211914s@ mbc={}] start_peering_interval up [4,3,2] -> [2,0,4], acting [4,3,2] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847970009s) [3,2,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391113281s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848526955s) [4,5,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391723633s@ mbc={}] start_peering_interval up [4,3,2] -> [4,5,0], acting [4,3,2] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848526955s) [4,5,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.391723633s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848186493s) [4,0,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391479492s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,2], acting [4,3,2] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848520279s) [5,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391845703s@ mbc={}] start_peering_interval up [4,3,2] -> [5,1,3], acting [4,3,2] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849073410s) [2,0,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.392211914s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848452568s) [5,1,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391845703s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848186493s) [4,0,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.391479492s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846900940s) [2,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.390502930s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847921371s) [2,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391601562s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846856117s) [2,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.390502930s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826606750s) [0,2,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.760864258s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,1], acting [1,2,0] -> [0,2,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826535225s) [0,2,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.760864258s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.12( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.928215027s) [4,2,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.862792969s@ mbc={}] start_peering_interval up [0,5,1] -> [4,2,0], acting [0,5,1] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.12( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.928181648s) [4,2,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862792969s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.10( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.927706718s) [0,1,2] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.862792969s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,2], acting [0,5,1] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847799301s) [2,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391601562s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846662521s) [4,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.390502930s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846662521s) [4,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.390502930s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846599579s) [3,4,5] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.390747070s@ mbc={}] start_peering_interval up [4,3,2] -> [3,4,5], acting [4,3,2] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846509933s) [3,4,5] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.390747070s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847429276s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391723633s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847368240s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391723633s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846905708s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.391357422s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846630096s) [5,3,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.391357422s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846239090s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.390869141s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847489357s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.392456055s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847416878s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.392456055s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.10( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.927677155s) [0,1,2] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862792969s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826341629s) [0,5,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.761596680s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826307297s) [0,5,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.761596680s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.16( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.927456856s) [0,5,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.862915039s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,4], acting [0,5,1] -> [0,5,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.16( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.927431107s) [0,5,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862915039s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826495171s) [1,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762207031s@ mbc={}] start_peering_interval up [1,2,0] -> [1,5,0], acting [1,2,0] -> [1,5,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826495171s) [1,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.762207031s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.929140091s) [1,3,5] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.864257812s@ mbc={}] start_peering_interval up [0,5,1] -> [1,3,5], acting [0,5,1] -> [1,3,5], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.836656570s) [0,5,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.772705078s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.926836967s) [0,1,5] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863037109s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,5], acting [0,5,1] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.9( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.926794052s) [0,1,5] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863037109s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.836275101s) [0,5,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.772705078s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826016426s) [0,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762573242s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825971603s) [0,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762573242s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.845648766s) [3,1,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.390869141s@ mbc={}] start_peering_interval up [4,3,2] -> [3,1,5], acting [4,3,2] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826076508s) [0,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762939453s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.845921516s) [1,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.390869141s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826044083s) [0,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762939453s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.928199768s) [1,5,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.865112305s@ mbc={}] start_peering_interval up [0,5,1] -> [1,5,3], acting [0,5,1] -> [1,5,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824212074s) [4,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.761230469s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,3], acting [1,2,0] -> [4,5,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824177742s) [4,5,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.761230469s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.928199768s) [1,5,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.865112305s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.927745819s) [4,5,3] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.864990234s@ mbc={}] start_peering_interval up [0,5,1] -> [4,5,3], acting [0,5,1] -> [4,5,3], acting_primary 0 -> 4, up_primary 0 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.927716255s) [4,5,3] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864990234s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825330734s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762695312s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,0], acting [1,2,0] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.925787926s) [1,0,2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863159180s@ mbc={}] start_peering_interval up [0,5,1] -> [1,0,2], acting [0,5,1] -> [1,0,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.845539093s) [3,1,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.390869141s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1b( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.932962418s) [1,2,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.870483398s@ mbc={}] start_peering_interval up [0,5,1] -> [1,2,0], acting [0,5,1] -> [1,2,0], acting_primary 0 -> 1, up_primary 0 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.823582649s) [4,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.761352539s@ mbc={}] start_peering_interval up [1,2,0] -> [4,3,2], acting [1,2,0] -> [4,3,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1b( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.932962418s) [1,2,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.870483398s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.823471069s) [4,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.761352539s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.834334373s) [1,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.772338867s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,3], acting [1,2,0] -> [1,2,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.834334373s) [1,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.772338867s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.850410461s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1126.392089844s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.8( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,0,5] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826417923s) [0,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764770508s@ mbc={}] start_peering_interval up [2,1,3] -> [0,2,4], acting [2,1,3] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826375961s) [0,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764770508s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.833748817s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.772216797s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.833718300s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.772216797s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824622154s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762695312s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.833834648s) [0,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.772705078s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,4], acting [1,2,0] -> [0,5,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.833806038s) [0,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.772705078s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.18( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.926295280s) [0,2,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.865112305s@ mbc={}] start_peering_interval up [0,5,1] -> [0,2,4], acting [0,5,1] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.9( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849005699s) [1,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.787841797s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.925787926s) [1,0,2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.863159180s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.849005699s) [1,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.787841797s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.929140091s) [1,3,5] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.864257812s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825627327s) [3,1,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764892578s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825596809s) [3,1,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764892578s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846673965s) [0,5,1] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1126.392089844s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.18( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.926177025s) [0,2,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.865112305s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848356247s) [2,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.787841797s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848320961s) [2,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.787841797s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1a( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.930783272s) [5,4,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.870483398s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825520515s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765380859s@ mbc={}] start_peering_interval up [2,1,3] -> [4,5,0], acting [2,1,3] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1a( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.930649757s) [5,4,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.870483398s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825486183s) [4,5,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.765380859s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.832449913s) [5,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.772705078s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848687172s) [2,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788818359s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848627090s) [2,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.788818359s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.19( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.930377960s) [5,3,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.870605469s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,1], acting [0,5,1] -> [5,3,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824584007s) [4,2,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764892578s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824550629s) [4,2,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764892578s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.19( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.930230141s) [5,3,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.870605469s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847454071s) [2,3,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788085938s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,4], acting [3,5,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847381592s) [2,3,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.788085938s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824898720s) [1,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765747070s@ mbc={}] start_peering_interval up [2,1,3] -> [1,2,3], acting [2,1,3] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824898720s) [1,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.765747070s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824740410s) [2,4,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765625000s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,3], acting [2,1,3] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.824719429s) [2,4,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.765625000s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848077774s) [1,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.789184570s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.848077774s) [1,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.789184570s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.19( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.823432922s) [3,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764648438s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.19( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.823374748s) [3,2,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764648438s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.819507599s) [3,1,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.760986328s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,5], acting [1,2,0] -> [3,1,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.819473267s) [3,1,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.760986328s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.928863525s) [3,4,5] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.870361328s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.832356453s) [5,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.772705078s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.922599792s) [3,4,5] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.864257812s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.12( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,3,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.922512054s) [3,4,5] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864257812s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.922941208s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.864868164s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.829498291s) [4,2,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.771606445s@ mbc={}] start_peering_interval up [1,2,0] -> [4,2,3], acting [1,2,0] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.829460144s) [4,2,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.771606445s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.922901154s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864868164s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.8( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.823564529s) [2,1,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765869141s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,0], acting [2,1,3] -> [2,1,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.8( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.823542595s) [2,1,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.765869141s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.840581894s) [2,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.782958984s@ mbc={}] start_peering_interval up [3,5,1] -> [2,4,0], acting [3,5,1] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.847092628s) [4,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788818359s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.840559959s) [2,4,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.782958984s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.928804398s) [3,4,5] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.870361328s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846509933s) [4,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.789184570s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.846436501s) [4,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.788818359s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.921212196s) [3,2,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.864379883s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,4], acting [0,5,1] -> [3,2,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.921135902s) [3,2,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864379883s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.15( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.845932961s) [4,5,3] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.789184570s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.845388412s) [4,2,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788696289s@ mbc={}] start_peering_interval up [3,5,1] -> [4,2,0], acting [3,5,1] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.920745850s) [5,4,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.864257812s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.3( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.920717239s) [5,4,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864257812s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.821426392s) [5,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765136719s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,1], acting [2,1,3] -> [5,3,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.844132423s) [5,1,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.787841797s@ mbc={}] start_peering_interval up [3,5,1] -> [5,1,0], acting [3,5,1] -> [5,1,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.844095230s) [5,1,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.787841797s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.827611923s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.771606445s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.920977592s) [5,3,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.864990234s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.821032524s) [5,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.765136719s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.827483177s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.771606445s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.7( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.920872688s) [5,3,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864990234s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.820359230s) [5,1,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764526367s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.818600655s) [2,0,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762939453s@ mbc={}] start_peering_interval up [1,2,0] -> [2,0,4], acting [1,2,0] -> [2,0,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.820246696s) [5,1,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764526367s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.818562508s) [2,0,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762939453s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.920592308s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.864990234s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.845326424s) [4,2,0] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.788696289s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.843921661s) [0,1,2] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788452148s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.843888283s) [0,1,2] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.788452148s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.920494080s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864990234s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.920421600s) [3,5,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.865112305s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,1], acting [0,5,1] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826794624s) [3,1,2] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.771606445s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.920372009s) [3,5,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.865112305s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.2( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.819837570s) [5,0,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764526367s@ mbc={}] start_peering_interval up [2,1,3] -> [5,0,4], acting [2,1,3] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826756477s) [3,1,2] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.771606445s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.820538521s) [1,0,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765380859s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.2( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.819711685s) [5,0,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764526367s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.820538521s) [1,0,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.765380859s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.843686104s) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788696289s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.1( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.917548180s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.862670898s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.820011139s) [3,1,2] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765136719s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,2], acting [2,1,3] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.1( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.917467117s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862670898s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.817783356s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762939453s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,4], acting [1,2,0] -> [5,3,4], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.819971085s) [3,1,2] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.765136719s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.843686104s) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.788696289s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.837242126s) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.782714844s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.837242126s) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.782714844s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826569557s) [3,1,2] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.771972656s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.820292473s) [3,1,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765869141s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.817693710s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762939453s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.10( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,0,5] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.820207596s) [3,1,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.765869141s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.917726517s) [5,3,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863525391s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.917680740s) [5,3,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863525391s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.826426506s) [3,1,2] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.771972656s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.818634033s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764526367s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,5], acting [2,1,3] -> [3,4,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.919013977s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.864990234s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.843000412s) [0,1,2] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.789062500s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.918905258s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864990234s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.1( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.818584442s) [3,4,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764526367s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.842966080s) [0,1,2] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.789062500s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.842095375s) [4,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788574219s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.f( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.918217659s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.864746094s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.842042923s) [4,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.788574219s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.917343140s) [5,1,0] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863769531s@ mbc={}] start_peering_interval up [0,5,1] -> [5,1,0], acting [0,5,1] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.918155670s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864746094s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825765610s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.772338867s@ mbc={}] start_peering_interval up [1,2,0] -> [4,0,5], acting [1,2,0] -> [4,0,5], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.818929672s) [3,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.765747070s@ mbc={}] start_peering_interval up [2,1,3] -> [3,5,4], acting [2,1,3] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.818904877s) [3,5,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.765747070s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825693130s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.772338867s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.916581154s) [2,3,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863403320s@ mbc={}] start_peering_interval up [0,5,1] -> [2,3,1], acting [0,5,1] -> [2,3,1], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.916562080s) [2,3,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863403320s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825043678s) [3,5,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.772216797s@ mbc={}] start_peering_interval up [1,2,0] -> [3,5,1], acting [1,2,0] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.825003624s) [3,5,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.772216797s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.915536880s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.862915039s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.835691452s) [3,4,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.783081055s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,5], acting [3,5,1] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.817013741s) [1,3,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764404297s@ mbc={}] start_peering_interval up [2,1,3] -> [1,3,2], acting [2,1,3] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.915475845s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862915039s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.5( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.916992188s) [5,1,0] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863769531s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.817013741s) [1,3,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.764404297s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.835575104s) [3,4,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.783081055s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.915884018s) [5,3,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863525391s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.835243225s) [5,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.782958984s@ mbc={}] start_peering_interval up [3,5,1] -> [5,3,1], acting [3,5,1] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.915850639s) [5,3,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863525391s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.823862076s) [5,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.771606445s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.835188866s) [5,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.782958984s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.835121155s) [2,1,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.783081055s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.835098267s) [2,1,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.783081055s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.823820114s) [5,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.771606445s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.816261292s) [1,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764404297s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,0], acting [2,1,3] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.b( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.816261292s) [1,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.764404297s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.914862633s) [3,4,5] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863159180s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.916447639s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.864868164s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.914795876s) [3,4,5] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863159180s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.916322708s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864868164s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.916034698s) [3,2,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.864746094s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.6( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.1f( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.813672066s) [5,1,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762573242s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.813630104s) [5,1,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762573242s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.815129280s) [1,0,5] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764282227s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,5], acting [2,1,3] -> [1,0,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.c( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.815129280s) [1,0,5] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.764282227s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.840201378s) [2,0,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.789428711s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,1], acting [3,5,1] -> [2,0,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,2,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.915957451s) [3,2,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864746094s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.913620949s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863281250s@ mbc={}] start_peering_interval up [0,5,1] -> [2,1,3], acting [0,5,1] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.1d( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.913477898s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863281250s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.839816093s) [2,0,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.789428711s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.839950562s) [0,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.790161133s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,4], acting [3,5,1] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.839911461s) [0,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.790161133s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.b( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.913022995s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.863281250s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[7.b( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.912963867s) [2,1,3] r=1 lpr=44 pi=[42,44)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863281250s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.806243896s) [1,5,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.756713867s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.806243896s) [1,5,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.756713867s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811546326s) [2,1,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762207031s@ mbc={}] start_peering_interval up [1,2,0] -> [2,1,0], acting [1,2,0] -> [2,1,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.812458992s) [3,4,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.763061523s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,2], acting [2,1,3] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811502457s) [2,1,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762207031s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.838316917s) [1,2,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788940430s@ mbc={}] start_peering_interval up [3,5,1] -> [1,2,3], acting [3,5,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.e( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.812404633s) [3,4,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.763061523s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.838316917s) [1,2,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.788940430s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.912912369s) [5,0,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863647461s@ mbc={}] start_peering_interval up [0,5,1] -> [5,0,4], acting [0,5,1] -> [5,0,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.812870026s) [4,2,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.763671875s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.912864685s) [5,0,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863647461s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.f( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.812843323s) [4,2,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.763671875s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.836906433s) [1,0,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.787841797s@ mbc={}] start_peering_interval up [3,5,1] -> [1,0,2], acting [3,5,1] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.836906433s) [1,0,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1130.787841797s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.912104607s) [3,2,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.863281250s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.912075996s) [3,2,1] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.863281250s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.10( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811706543s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.763061523s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,5], acting [2,1,3] -> [4,0,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.10( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811684608s) [4,0,5] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.763061523s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.836580276s) [0,2,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788085938s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,1], acting [3,5,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.836509705s) [0,2,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.788085938s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.810488701s) [5,1,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762084961s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.810460091s) [5,1,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762084961s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.831286430s) [3,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.783081055s@ mbc={}] start_peering_interval up [3,5,1] -> [3,2,4], acting [3,5,1] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.11( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.812083244s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.763916016s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,4], acting [2,1,3] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.831256866s) [3,2,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.783081055s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.11( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.812019348s) [5,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.763916016s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.912158966s) [2,4,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.864135742s@ mbc={}] start_peering_interval up [0,5,1] -> [2,4,0], acting [0,5,1] -> [2,4,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.912133217s) [2,4,0] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.864135742s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.911255836s) [3,5,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.862915039s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,4], acting [0,5,1] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809673309s) [2,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.761718750s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,1], acting [1,2,0] -> [2,3,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809649467s) [2,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.761718750s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.12( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811560631s) [4,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.763916016s@ mbc={}] start_peering_interval up [2,1,3] -> [4,3,2], acting [2,1,3] -> [4,3,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.12( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811540604s) [4,3,2] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.763916016s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.13( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811522484s) [1,5,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.763916016s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.13( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811522484s) [1,5,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.763916016s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.910507202s) [3,5,4] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862915039s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.830570221s) [4,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.783203125s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.830540657s) [4,3,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.783203125s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809476852s) [5,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.762207031s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.14( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811616898s) [2,4,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764404297s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,0], acting [2,1,3] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809395790s) [5,3,1] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.762207031s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.14( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.811584473s) [2,4,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764404297s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.834493637s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.787475586s@ mbc={}] start_peering_interval up [3,5,1] -> [4,0,5], acting [3,5,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.834406853s) [4,0,5] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.787475586s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.807817459s) [2,4,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.761474609s@ mbc={}] start_peering_interval up [1,2,0] -> [2,4,0], acting [1,2,0] -> [2,4,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.807776451s) [2,4,0] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.761474609s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.810615540s) [1,0,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764404297s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.810615540s) [1,0,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown pruub 1136.764404297s@ mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.829093933s) [0,1,2] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.783081055s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.833179474s) [2,1,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.787353516s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.916498184s) [3,1,2] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.870727539s@ mbc={}] start_peering_interval up [0,5,1] -> [3,1,2], acting [0,5,1] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.828983307s) [3,4,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.783325195s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,2], acting [3,5,1] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.833130836s) [2,1,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.787353516s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.916423798s) [3,1,2] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.870727539s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.828905106s) [3,4,2] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.783325195s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.16( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.810091972s) [5,1,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764282227s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,0], acting [2,1,3] -> [5,1,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.807322502s) [2,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.761840820s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,4], acting [1,2,0] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.16( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809775352s) [5,1,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764282227s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.807270050s) [2,3,4] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.761840820s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809228897s) [5,1,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764160156s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.832715034s) [3,5,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.787597656s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,4], acting [3,5,1] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809169769s) [5,1,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764160156s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.832682610s) [3,5,4] r=-1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.787597656s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.828961372s) [0,1,2] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.783081055s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.18( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809296608s) [4,2,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active pruub 1136.764404297s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,3], acting [2,1,3] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[2.18( empty local-lis/les=38/39 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44 pruub=14.809264183s) [4,2,3] r=-1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1136.764404297s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.833619118s) [0,5,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active pruub 1130.788818359s@ mbc={}] start_peering_interval up [3,5,1] -> [0,5,1], acting [3,5,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44 pruub=8.833168983s) [0,5,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1130.788818359s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.906793594s) [3,4,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active pruub 1132.862792969s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,2], acting [0,5,1] -> [3,4,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44 pruub=10.906724930s) [3,4,2] r=-1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862792969s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.8( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.4( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,3,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:00:43 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 23 03:00:43 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.b( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,2,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.1( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,2,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.1f( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,2,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.6( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,2,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.18( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [0,2,4] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.16( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [0,5,4] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.1f( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [0,5,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.16( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,3,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,4,5] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.17( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,1,5] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.14( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.a( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,1,2] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.15( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [2,4,0] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.2( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.14( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,5,4] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,5,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.1a( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,3] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.13( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,4,2] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.5( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,2,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.c( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,2,4] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,4,0] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.1d( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,4,5] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.1e( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.3( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,0,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [0,5,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.1f( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [3,4,5] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,1,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.19( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,2,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.14( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [2,4,0] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.1( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,5] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.f( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,4,5] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [2,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.1a( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,4,0] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.2( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,0,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,5,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,1,3] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.e( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [3,4,2] r=1 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.15( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,3,4] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=0/0 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [5,3,1] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.17( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,2,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,3,4] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,4,2] r=1 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.7( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,3,4] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[4.11( empty local-lis/les=0/0 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [3,5,4] r=2 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[3.5( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.3( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,4,0] r=1 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [5,0,4] r=2 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,0,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 44 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [5,3,4] r=2 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[5.8( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[6.1e( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [4,5,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,0,5] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[6.12( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [4,2,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[2.c( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,0,5] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [1,3,5] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [1,5,3] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[4.3( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[2.10( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,0,5] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[5.d( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,5,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,3,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,5,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[5.e( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,0,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,0,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [1,0,2] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,2,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,3,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,0,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,0,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[6.1b( empty local-lis/les=44/45 n=0 ec=42/31 lis/c=42/42 les/c/f=43/43/0 sis=44) [1,2,0] r=0 lpr=44 pi=[42,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[5.b( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,0,5] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,0] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,2,0] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=40/25 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [1,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[4.19( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[31668]: osd.1 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [1,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[3.1a( empty local-lis/les=44/45 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,3,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[4.15( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[4.1f( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[2.12( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,3,2] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[4.1d( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,5,3] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=38/19 lis/c=38/38 les/c/f=39/39/0 sis=44) [4,2,3] r=0 lpr=44 pi=[38,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:44 localhost ceph-osd[32615]: osd.4 pg_epoch: 45 pg[4.6( empty local-lis/les=44/45 n=0 ec=40/23 lis/c=40/40 les/c/f=41/41/0 sis=44) [4,3,2] r=0 lpr=44 pi=[40,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:00:45 localhost ceph-osd[31668]: osd.1 pg_epoch: 46 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.878570557s) [3,1,5] r=1 lpr=46 pi=[42,46)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.862548828s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:45 localhost ceph-osd[31668]: osd.1 pg_epoch: 46 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.878907204s) [3,1,5] r=1 lpr=46 pi=[42,46)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.862915039s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:45 localhost ceph-osd[31668]: osd.1 pg_epoch: 46 pg[7.2( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.878608704s) [3,1,5] r=1 lpr=46 pi=[42,46)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.862792969s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:45 localhost ceph-osd[31668]: osd.1 pg_epoch: 46 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.878429413s) [3,1,5] r=1 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862548828s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:45 localhost ceph-osd[31668]: osd.1 pg_epoch: 46 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.878780365s) [3,1,5] r=1 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862915039s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:45 localhost ceph-osd[31668]: osd.1 pg_epoch: 46 pg[7.2( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.878547668s) [3,1,5] r=1 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862792969s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:45 localhost ceph-osd[31668]: osd.1 pg_epoch: 46 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.878751755s) [3,1,5] r=1 lpr=46 pi=[42,46)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1132.862792969s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:45 localhost ceph-osd[31668]: osd.1 pg_epoch: 46 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=8.878367424s) [3,1,5] r=1 lpr=46 pi=[42,46)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1132.862792969s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:46 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 23 03:00:50 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 23 03:00:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:00:55 localhost podman[55626]: 2025-11-23 08:00:55.181598678 +0000 UTC m=+0.083178547 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12)
Nov 23 03:00:55 localhost podman[55626]: 2025-11-23 08:00:55.361782983 +0000 UTC m=+0.263362852 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Nov 23 03:00:55 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:00:55 localhost ceph-osd[31668]: osd.1 pg_epoch: 48 pg[7.b( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.979520798s) [3,2,4] r=-1 lpr=48 pi=[44,48)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1146.991943359s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:55 localhost ceph-osd[31668]: osd.1 pg_epoch: 48 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.978505135s) [3,2,4] r=-1 lpr=48 pi=[44,48)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1146.990966797s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:55 localhost ceph-osd[31668]: osd.1 pg_epoch: 48 pg[7.b( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.979320526s) [3,2,4] r=-1 lpr=48 pi=[44,48)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1146.991943359s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:55 localhost ceph-osd[31668]: osd.1 pg_epoch: 48 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.978445053s) [3,2,4] r=-1 lpr=48 pi=[44,48)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1146.990966797s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:55 localhost ceph-osd[31668]: osd.1 pg_epoch: 48 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.978412628s) [3,2,4] r=-1 lpr=48 pi=[44,48)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1146.990966797s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:55 localhost ceph-osd[31668]: osd.1 pg_epoch: 48 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.978363037s) [3,2,4] r=-1 lpr=48 pi=[44,48)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1146.990966797s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:55 localhost ceph-osd[31668]: osd.1 pg_epoch: 48 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.976034164s) [3,2,4] r=-1 lpr=48 pi=[44,48)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1146.988891602s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:00:55 localhost ceph-osd[31668]: osd.1 pg_epoch: 48 pg[7.3( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48 pruub=12.975981712s) [3,2,4] r=-1 lpr=48 pi=[44,48)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1146.988891602s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:56 localhost ceph-osd[32615]: osd.4 pg_epoch: 48 pg[7.b( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48) [3,2,4] r=2 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:56 localhost ceph-osd[32615]: osd.4 pg_epoch: 48 pg[7.7( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48) [3,2,4] r=2 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:56 localhost ceph-osd[32615]: osd.4 pg_epoch: 48 pg[7.3( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48) [3,2,4] r=2 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:56 localhost ceph-osd[32615]: osd.4 pg_epoch: 48 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=48) [3,2,4] r=2 lpr=48 pi=[44,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:00:58 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 23 03:00:58 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 23 03:00:59 localhost python3[55670]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:00:59 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 23 03:00:59 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 23 03:01:00 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 23 03:01:00 localhost python3[55686]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:00 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 23 03:01:01 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.e deep-scrub starts
Nov 23 03:01:01 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.e deep-scrub ok
Nov 23 03:01:02 localhost ceph-osd[31668]: osd.1 pg_epoch: 50 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.711714745s) [0,5,4] r=-1 lpr=50 pi=[42,50)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1156.863769531s@ mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:02 localhost ceph-osd[31668]: osd.1 pg_epoch: 50 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.711639404s) [0,5,4] r=-1 lpr=50 pi=[42,50)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1156.863769531s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:02 localhost ceph-osd[31668]: osd.1 pg_epoch: 50 pg[7.4( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.711989403s) [0,5,4] r=-1 lpr=50 pi=[42,50)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1156.864135742s@ mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:02 localhost ceph-osd[31668]: osd.1 pg_epoch: 50 pg[7.4( v 35'39 (0'0,35'39] local-lis/les=42/43 n=2 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=15.711897850s) [0,5,4] r=-1 lpr=50 pi=[42,50)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1156.864135742s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:02 localhost python3[55713]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:02 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 23 03:01:03 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 23 03:01:03 localhost ceph-osd[32615]: osd.4 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50) [0,5,4] r=2 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:03 localhost ceph-osd[32615]: osd.4 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=50) [0,5,4] r=2 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:04 localhost ceph-osd[32615]: osd.4 pg_epoch: 52 pg[7.d( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52) [4,0,2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:01:04 localhost ceph-osd[32615]: osd.4 pg_epoch: 52 pg[7.5( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52) [4,0,2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:01:04 localhost ceph-osd[31668]: osd.1 pg_epoch: 52 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52 pruub=11.792321205s) [4,0,2] r=-1 lpr=52 pi=[44,52)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1154.989990234s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:04 localhost ceph-osd[31668]: osd.1 pg_epoch: 52 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52 pruub=11.791239738s) [4,0,2] r=-1 lpr=52 pi=[44,52)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1154.988891602s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:04 localhost ceph-osd[31668]: osd.1 pg_epoch: 52 pg[7.5( v 35'39 (0'0,35'39] local-lis/les=44/45 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52 pruub=11.792231560s) [4,0,2] r=-1 lpr=52 pi=[44,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1154.989990234s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:04 localhost ceph-osd[31668]: osd.1 pg_epoch: 52 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52 pruub=11.790996552s) [4,0,2] r=-1 lpr=52 pi=[44,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1154.988891602s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:04 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 23 03:01:05 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 23 03:01:05 localhost ceph-osd[32615]: osd.4 pg_epoch: 53 pg[7.d( v 35'39 lc 35'13 (0'0,35'39] local-lis/les=52/53 n=1 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52) [4,0,2] r=0 lpr=52 pi=[44,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:01:05 localhost ceph-osd[32615]: osd.4 pg_epoch: 53 pg[7.5( v 35'39 lc 35'9 (0'0,35'39] local-lis/les=52/53 n=2 ec=42/33 lis/c=44/44 les/c/f=45/47/0 sis=52) [4,0,2] r=0 lpr=52 pi=[44,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:01:06 localhost python3[55761]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:06 localhost python3[55804]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884865.7779784-92282-259367968207945/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=5f137984986c8cf5df5aec7749430e0dc129d0db backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:06 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 23 03:01:06 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 23 03:01:07 localhost ceph-osd[31668]: osd.1 pg_epoch: 54 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=11.265635490s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1157.069702148s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:07 localhost ceph-osd[31668]: osd.1 pg_epoch: 54 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=11.342354774s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1157.146484375s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:07 localhost ceph-osd[31668]: osd.1 pg_epoch: 54 pg[7.6( v 35'39 (0'0,35'39] local-lis/les=46/47 n=2 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=11.265490532s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1157.069702148s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:07 localhost ceph-osd[31668]: osd.1 pg_epoch: 54 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=11.342193604s) [0,2,4] r=-1 lpr=54 pi=[46,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1157.146484375s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:08 localhost ceph-osd[32615]: osd.4 pg_epoch: 54 pg[7.6( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54) [0,2,4] r=2 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:08 localhost ceph-osd[32615]: osd.4 pg_epoch: 54 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=54) [0,2,4] r=2 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:08 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 23 03:01:09 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 23 03:01:09 localhost ceph-osd[32615]: osd.4 pg_epoch: 56 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=48/49 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=11.383334160s) [2,1,3] r=-1 lpr=56 pi=[48,56)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1154.822509766s@ mbc={}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:09 localhost ceph-osd[32615]: osd.4 pg_epoch: 56 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=48/49 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=11.383235931s) [2,1,3] r=-1 lpr=56 pi=[48,56)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1154.822509766s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:09 localhost ceph-osd[32615]: osd.4 pg_epoch: 56 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=48/49 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=11.382706642s) [2,1,3] r=-1 lpr=56 pi=[48,56)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1154.822509766s@ mbc={}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:09 localhost ceph-osd[32615]: osd.4 pg_epoch: 56 pg[7.7( v 35'39 (0'0,35'39] local-lis/les=48/49 n=1 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=11.382533073s) [2,1,3] r=-1 lpr=56 pi=[48,56)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1154.822509766s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:09 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 23 03:01:09 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 23 03:01:10 localhost ceph-osd[31668]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56) [2,1,3] r=1 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:10 localhost ceph-osd[31668]: osd.1 pg_epoch: 56 pg[7.7( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=48/48 les/c/f=49/49/0 sis=56) [2,1,3] r=1 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:10 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 23 03:01:11 localhost python3[55866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:11 localhost ceph-osd[31668]: osd.1 pg_epoch: 58 pg[7.8( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=58 pruub=15.003788948s) [3,2,1] r=2 lpr=58 pi=[42,58)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1164.863891602s@ mbc={}] start_peering_interval up [5,1,3] -> [3,2,1], acting [5,1,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:11 localhost ceph-osd[31668]: osd.1 pg_epoch: 58 pg[7.8( v 35'39 (0'0,35'39] local-lis/les=42/43 n=1 ec=42/33 lis/c=42/42 les/c/f=43/43/0 sis=58 pruub=15.003592491s) [3,2,1] r=2 lpr=58 pi=[42,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1164.863891602s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:11 localhost python3[55909]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884871.0604374-92282-60399466370976/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=8a18e979d41caf333cb312628abb5051e6d0049c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:13 localhost ceph-osd[31668]: osd.1 pg_epoch: 60 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=11.023200989s) [0,4,2] r=-1 lpr=60 pi=[44,60)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1162.993652344s@ mbc={}] start_peering_interval up [2,1,3] -> [0,4,2], acting [2,1,3] -> [0,4,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:13 localhost ceph-osd[31668]: osd.1 pg_epoch: 60 pg[7.9( v 35'39 (0'0,35'39] local-lis/les=44/45 n=1 ec=42/33 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=11.023061752s) [0,4,2] r=-1 lpr=60 pi=[44,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1162.993652344s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:13 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 23 03:01:13 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 23 03:01:13 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 23 03:01:14 localhost ceph-osd[32615]: osd.4 pg_epoch: 60 pg[7.9( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=44/44 les/c/f=45/45/0 sis=60) [0,4,2] r=1 lpr=60 pi=[44,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:14 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 23 03:01:14 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 23 03:01:15 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 23 03:01:16 localhost ceph-osd[32615]: osd.4 pg_epoch: 62 pg[7.a( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=62) [4,0,5] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 23 03:01:16 localhost ceph-osd[31668]: osd.1 pg_epoch: 62 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=10.256892204s) [4,0,5] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 1165.069824219s@ mbc={}] start_peering_interval up [3,1,5] -> [4,0,5], acting [3,1,5] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:16 localhost ceph-osd[31668]: osd.1 pg_epoch: 62 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=46/47 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=10.256776810s) [4,0,5] r=-1 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1165.069824219s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:16 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 23 03:01:16 localhost python3[55971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:16 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 23 03:01:17 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 23 03:01:17 localhost python3[56014]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884876.4738395-92282-210758473670072/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=ae43e71821d6a319ccba3331b262b98567ce770b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:17 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Nov 23 03:01:17 localhost ceph-osd[32615]: osd.4 pg_epoch: 63 pg[7.a( v 35'39 (0'0,35'39] local-lis/les=62/63 n=1 ec=42/33 lis/c=46/46 les/c/f=47/47/0 sis=62) [4,0,5] r=0 lpr=62 pi=[46,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 23 03:01:17 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Nov 23 03:01:18 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 23 03:01:19 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 23 03:01:21 localhost python3[56076]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:21 localhost python3[56121]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884881.2389357-92642-187758938556315/source _original_basename=tmpoa1m3vu7 follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:23 localhost python3[56183]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:23 localhost python3[56226]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884882.8583655-92847-170109872462996/source _original_basename=tmpctwfqoq_ follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:23 localhost ceph-osd[32615]: osd.4 pg_epoch: 64 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=50/51 n=1 ec=42/33 lis/c=50/50 les/c/f=51/51/0 sis=64 pruub=12.218492508s) [2,3,4] r=2 lpr=64 pi=[50,64)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1169.798461914s@ mbc={}] start_peering_interval up [0,5,4] -> [2,3,4], acting [0,5,4] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:23 localhost ceph-osd[32615]: osd.4 pg_epoch: 64 pg[7.c( v 35'39 (0'0,35'39] local-lis/les=50/51 n=1 ec=42/33 lis/c=50/50 les/c/f=51/51/0 sis=64 pruub=12.217814445s) [2,3,4] r=2 lpr=64 pi=[50,64)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1169.798461914s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:23 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Nov 23 03:01:23 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Nov 23 03:01:23 localhost python3[56256]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Nov 23 03:01:24 localhost python3[56274]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:01:24 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Nov 23 03:01:25 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Nov 23 03:01:25 localhost ceph-osd[32615]: osd.4 pg_epoch: 66 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=42/33 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=12.308586121s) [2,3,1] r=-1 lpr=66 pi=[52,66)/1 crt=35'39 mlcod 0'0 active pruub 1171.894775391s@ mbc={255={}}] start_peering_interval up [4,0,2] -> [2,3,1], acting [4,0,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:25 localhost ceph-osd[32615]: osd.4 pg_epoch: 66 pg[7.d( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=42/33 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=12.308259010s) [2,3,1] r=-1 lpr=66 pi=[52,66)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1171.894775391s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:01:25 localhost podman[56447]: 2025-11-23 08:01:25.841491765 +0000 UTC m=+0.103093004 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git)
Nov 23 03:01:25 localhost ansible-async_wrapper.py[56446]: Invoked with 267267747237 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763884885.3485894-92940-129126099920247/AnsiballZ_command.py _
Nov 23 03:01:25 localhost ansible-async_wrapper.py[56477]: Starting module and watcher
Nov 23 03:01:25 localhost ansible-async_wrapper.py[56477]: Start watching 56478 (3600)
Nov 23 03:01:25 localhost ansible-async_wrapper.py[56478]: Start module (56478)
Nov 23 03:01:25 localhost ansible-async_wrapper.py[56446]: Return async_wrapper task started.
Nov 23 03:01:26 localhost podman[56447]: 2025-11-23 08:01:26.067776495 +0000 UTC m=+0.329377684 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible)
Nov 23 03:01:26 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:01:26 localhost python3[56498]: ansible-ansible.legacy.async_status Invoked with jid=267267747237.56446 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:01:26 localhost ceph-osd[31668]: osd.1 pg_epoch: 66 pg[7.d( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=52/52 les/c/f=53/53/0 sis=66) [2,3,1] r=2 lpr=66 pi=[52,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:27 localhost ceph-osd[32615]: osd.4 pg_epoch: 68 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=42/33 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=12.777397156s) [3,1,5] r=-1 lpr=68 pi=[54,68)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1174.462890625s@ mbc={}] start_peering_interval up [0,2,4] -> [3,1,5], acting [0,2,4] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:27 localhost ceph-osd[32615]: osd.4 pg_epoch: 68 pg[7.e( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=42/33 lis/c=54/54 les/c/f=55/55/0 sis=68 pruub=12.776398659s) [3,1,5] r=-1 lpr=68 pi=[54,68)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1174.462890625s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:28 localhost ceph-osd[31668]: osd.1 pg_epoch: 68 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=54/54 les/c/f=55/55/0 sis=68) [3,1,5] r=1 lpr=68 pi=[54,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:28 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 23 03:01:29 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 23 03:01:29 localhost puppet-user[56496]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 03:01:29 localhost puppet-user[56496]:   (file: /etc/puppet/hiera.yaml)
Nov 23 03:01:29 localhost puppet-user[56496]: Warning: Undefined variable '::deploy_config_name';
Nov 23 03:01:29 localhost puppet-user[56496]:   (file & line not available)
Nov 23 03:01:29 localhost puppet-user[56496]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 03:01:29 localhost puppet-user[56496]:   (file & line not available)
Nov 23 03:01:29 localhost puppet-user[56496]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Nov 23 03:01:29 localhost puppet-user[56496]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Nov 23 03:01:29 localhost puppet-user[56496]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.11 seconds
Nov 23 03:01:29 localhost puppet-user[56496]: Notice: Applied catalog in 0.03 seconds
Nov 23 03:01:29 localhost puppet-user[56496]: Application:
Nov 23 03:01:29 localhost puppet-user[56496]:   Initial environment: production
Nov 23 03:01:29 localhost puppet-user[56496]:   Converged environment: production
Nov 23 03:01:29 localhost puppet-user[56496]:         Run mode: user
Nov 23 03:01:29 localhost puppet-user[56496]: Changes:
Nov 23 03:01:29 localhost puppet-user[56496]: Events:
Nov 23 03:01:29 localhost puppet-user[56496]: Resources:
Nov 23 03:01:29 localhost puppet-user[56496]:            Total: 10
Nov 23 03:01:29 localhost puppet-user[56496]: Time:
Nov 23 03:01:29 localhost puppet-user[56496]:         Schedule: 0.00
Nov 23 03:01:29 localhost puppet-user[56496]:             File: 0.00
Nov 23 03:01:29 localhost puppet-user[56496]:             Exec: 0.01
Nov 23 03:01:29 localhost puppet-user[56496]:           Augeas: 0.01
Nov 23 03:01:29 localhost puppet-user[56496]:   Transaction evaluation: 0.02
Nov 23 03:01:29 localhost puppet-user[56496]:   Catalog application: 0.03
Nov 23 03:01:29 localhost puppet-user[56496]:   Config retrieval: 0.14
Nov 23 03:01:29 localhost puppet-user[56496]:         Last run: 1763884889
Nov 23 03:01:29 localhost puppet-user[56496]:       Filebucket: 0.00
Nov 23 03:01:29 localhost puppet-user[56496]:            Total: 0.04
Nov 23 03:01:29 localhost puppet-user[56496]: Version:
Nov 23 03:01:29 localhost puppet-user[56496]:           Config: 1763884889
Nov 23 03:01:29 localhost puppet-user[56496]:           Puppet: 7.10.0
Nov 23 03:01:29 localhost ansible-async_wrapper.py[56478]: Module complete (56478)
Nov 23 03:01:29 localhost ceph-osd[31668]: osd.1 pg_epoch: 70 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=42/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.064504623s) [0,4,5] r=-1 lpr=70 pi=[56,70)/1 luod=0'0 crt=35'39 mlcod 0'0 active pruub 1181.208007812s@ mbc={}] start_peering_interval up [2,1,3] -> [0,4,5], acting [2,1,3] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 23 03:01:29 localhost ceph-osd[31668]: osd.1 pg_epoch: 70 pg[7.f( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=42/33 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.064048767s) [0,4,5] r=-1 lpr=70 pi=[56,70)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 1181.208007812s@ mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:29 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 23 03:01:29 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 23 03:01:30 localhost ansible-async_wrapper.py[56477]: Done in kid B.
Nov 23 03:01:30 localhost ceph-osd[32615]: osd.4 pg_epoch: 70 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/33 lis/c=56/56 les/c/f=57/57/0 sis=70) [0,4,5] r=1 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Nov 23 03:01:31 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 23 03:01:32 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 23 03:01:32 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 23 03:01:33 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 23 03:01:35 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Nov 23 03:01:35 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Nov 23 03:01:36 localhost python3[56701]: ansible-ansible.legacy.async_status Invoked with jid=267267747237.56446 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:01:37 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 23 03:01:37 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 23 03:01:37 localhost python3[56717]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 03:01:37 localhost python3[56733]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:01:37 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 23 03:01:37 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 23 03:01:38 localhost python3[56783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:38 localhost python3[56801]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp9zzfygs7 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 03:01:38 localhost python3[56831]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:39 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 23 03:01:39 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 23 03:01:39 localhost python3[56934]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Nov 23 03:01:40 localhost python3[56953]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:40 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 23 03:01:41 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 23 03:01:41 localhost python3[56985]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:01:41 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 6.1 deep-scrub starts
Nov 23 03:01:41 localhost python3[57035]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:41 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 6.1 deep-scrub ok
Nov 23 03:01:42 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Nov 23 03:01:42 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Nov 23 03:01:42 localhost python3[57053]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:42 localhost python3[57115]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:42 localhost python3[57133]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:43 localhost python3[57195]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:43 localhost python3[57213]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:44 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 23 03:01:44 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 23 03:01:44 localhost python3[57275]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:44 localhost python3[57293]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:45 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 23 03:01:45 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 23 03:01:45 localhost python3[57323]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:01:45 localhost systemd[1]: Reloading.
Nov 23 03:01:45 localhost systemd-rc-local-generator[57348]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:01:45 localhost systemd-sysv-generator[57353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:01:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:01:46 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 23 03:01:47 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 23 03:01:47 localhost python3[57409]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:47 localhost python3[57427]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:48 localhost python3[57489]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:01:48 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 23 03:01:48 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 23 03:01:48 localhost python3[57507]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:48 localhost python3[57537]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:01:48 localhost systemd[1]: Reloading.
Nov 23 03:01:48 localhost systemd-sysv-generator[57569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:01:48 localhost systemd-rc-local-generator[57565]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:01:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:01:49 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 23 03:01:49 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 23 03:01:49 localhost systemd[1]: Starting Create netns directory...
Nov 23 03:01:49 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 03:01:49 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 03:01:49 localhost systemd[1]: Finished Create netns directory.
Nov 23 03:01:49 localhost python3[57596]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Nov 23 03:01:51 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 23 03:01:51 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 23 03:01:51 localhost python3[57655]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Nov 23 03:01:51 localhost podman[57722]: 2025-11-23 08:01:51.496496327 +0000 UTC m=+0.077408884 container create 200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step2, container_name=nova_compute_init_log, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com)
Nov 23 03:01:51 localhost systemd[1]: Started libpod-conmon-200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9.scope.
Nov 23 03:01:51 localhost podman[57744]: 2025-11-23 08:01:51.551883985 +0000 UTC m=+0.093192202 container create e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, batch=17.1_20251118.1, config_id=tripleo_step2, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, release=1761123044, container_name=nova_virtqemud_init_logs, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team)
Nov 23 03:01:51 localhost systemd[1]: Started libcrun container.
Nov 23 03:01:51 localhost podman[57722]: 2025-11-23 08:01:51.455450278 +0000 UTC m=+0.036362885 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 03:01:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c1e8ec154b353c1f5742760d5a341313065b707b9f4dfe4e57636918f18c91/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:01:51 localhost podman[57722]: 2025-11-23 08:01:51.571712081 +0000 UTC m=+0.152624658 container init 200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_compute_init_log, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2)
Nov 23 03:01:51 localhost systemd[1]: Started libpod-conmon-e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7.scope.
Nov 23 03:01:51 localhost podman[57722]: 2025-11-23 08:01:51.581472209 +0000 UTC m=+0.162384786 container start 200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step2, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute_init_log, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Nov 23 03:01:51 localhost python3[57655]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Nov 23 03:01:51 localhost systemd[1]: libpod-200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9.scope: Deactivated successfully.
Nov 23 03:01:51 localhost podman[57744]: 2025-11-23 08:01:51.493185279 +0000 UTC m=+0.034493546 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:01:51 localhost systemd[1]: Started libcrun container.
Nov 23 03:01:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac3b29840d2c794dc0c0033b626822dc9158444d4c44499bc50ec992e63998d/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Nov 23 03:01:51 localhost podman[57744]: 2025-11-23 08:01:51.622482447 +0000 UTC m=+0.163790664 container init e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, container_name=nova_virtqemud_init_logs, distribution-scope=public, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:01:51 localhost podman[57744]: 2025-11-23 08:01:51.631052464 +0000 UTC m=+0.172360681 container start e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, tcib_managed=true, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, container_name=nova_virtqemud_init_logs, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64)
Nov 23 03:01:51 localhost python3[57655]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm
Nov 23 03:01:51 localhost systemd[1]: libpod-e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7.scope: Deactivated successfully.
Nov 23 03:01:51 localhost podman[57768]: 2025-11-23 08:01:51.660911676 +0000 UTC m=+0.057902696 container died 200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute_init_log, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Nov 23 03:01:51 localhost podman[57791]: 2025-11-23 08:01:51.69691518 +0000 UTC m=+0.045828316 container died e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, container_name=nova_virtqemud_init_logs, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt)
Nov 23 03:01:51 localhost podman[57795]: 2025-11-23 08:01:51.792151015 +0000 UTC m=+0.133827029 container cleanup e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=nova_virtqemud_init_logs, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 23 03:01:51 localhost systemd[1]: libpod-conmon-e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7.scope: Deactivated successfully.
Nov 23 03:01:51 localhost podman[57769]: 2025-11-23 08:01:51.844058141 +0000 UTC m=+0.242199691 container cleanup 200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, release=1761123044, container_name=nova_compute_init_log, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, config_id=tripleo_step2, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Nov 23 03:01:51 localhost systemd[1]: libpod-conmon-200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9.scope: Deactivated successfully.
Nov 23 03:01:51 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 23 03:01:52 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 23 03:01:52 localhost podman[57919]: 2025-11-23 08:01:52.14009681 +0000 UTC m=+0.084606535 container create 547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step2, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:01:52 localhost podman[57920]: 2025-11-23 08:01:52.172657373 +0000 UTC m=+0.107952343 container create e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, release=1761123044, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible)
Nov 23 03:01:52 localhost podman[57919]: 2025-11-23 08:01:52.091625275 +0000 UTC m=+0.036134970 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Nov 23 03:01:52 localhost systemd[1]: Started libpod-conmon-547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae.scope.
Nov 23 03:01:52 localhost systemd[1]: Started libpod-conmon-e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6.scope.
Nov 23 03:01:52 localhost podman[57920]: 2025-11-23 08:01:52.109354155 +0000 UTC m=+0.044649125 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:01:52 localhost systemd[1]: Started libcrun container.
Nov 23 03:01:52 localhost systemd[1]: Started libcrun container.
Nov 23 03:01:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75d69f0f2cde9fc9824305dd942961655ee26dd5e86c5ce60bd1c2a9ea6511d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 03:01:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d72a79238da9e41e472230adf30122e356dada3dee1ed84822dcd8584621e6/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Nov 23 03:01:52 localhost podman[57920]: 2025-11-23 08:01:52.234146763 +0000 UTC m=+0.169441723 container init e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_id=tripleo_step2, distribution-scope=public, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:01:52 localhost podman[57920]: 2025-11-23 08:01:52.246201184 +0000 UTC m=+0.181496154 container start e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, container_name=create_virtlogd_wrapper, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step2, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public)
Nov 23 03:01:52 localhost podman[57920]: 2025-11-23 08:01:52.246476241 +0000 UTC m=+0.181771211 container attach e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, batch=17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 23 03:01:52 localhost podman[57919]: 2025-11-23 08:01:52.282827184 +0000 UTC m=+0.227336879 container init 547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, architecture=x86_64, config_id=tripleo_step2, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.)
Nov 23 03:01:52 localhost podman[57919]: 2025-11-23 08:01:52.290564079 +0000 UTC m=+0.235073774 container start 547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=create_haproxy_wrapper, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step2, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1)
Nov 23 03:01:52 localhost podman[57919]: 2025-11-23 08:01:52.290888447 +0000 UTC m=+0.235398142 container attach 547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, container_name=create_haproxy_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Nov 23 03:01:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6b49595181999b9547cadf7f9051c047b2ecfa5a5fcc5425cdb51bded6398f7-userdata-shm.mount: Deactivated successfully.
Nov 23 03:01:52 localhost systemd[1]: var-lib-containers-storage-overlay-02c1e8ec154b353c1f5742760d5a341313065b707b9f4dfe4e57636918f18c91-merged.mount: Deactivated successfully.
Nov 23 03:01:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-200a0caa49a02cb1050ed3fc59b02bf8cb7d219cbd790eff5e2884c04855d0d9-userdata-shm.mount: Deactivated successfully.
Nov 23 03:01:52 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 23 03:01:53 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 23 03:01:53 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 23 03:01:53 localhost ovs-vsctl[58021]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Nov 23 03:01:53 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 23 03:01:54 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 23 03:01:54 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 23 03:01:54 localhost systemd[1]: libpod-e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6.scope: Deactivated successfully.
Nov 23 03:01:54 localhost systemd[1]: libpod-e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6.scope: Consumed 2.100s CPU time.
Nov 23 03:01:54 localhost podman[57920]: 2025-11-23 08:01:54.348313165 +0000 UTC m=+2.283608145 container died e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step2, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=create_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:01:54 localhost systemd[1]: tmp-crun.G39yrh.mount: Deactivated successfully.
Nov 23 03:01:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6-userdata-shm.mount: Deactivated successfully.
Nov 23 03:01:54 localhost podman[58171]: 2025-11-23 08:01:54.456784041 +0000 UTC m=+0.093955182 container cleanup e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:01:54 localhost systemd[1]: libpod-conmon-e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6.scope: Deactivated successfully.
Nov 23 03:01:54 localhost python3[57655]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper
Nov 23 03:01:54 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 23 03:01:54 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 23 03:01:55 localhost systemd[1]: libpod-547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae.scope: Deactivated successfully.
Nov 23 03:01:55 localhost systemd[1]: libpod-547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae.scope: Consumed 2.118s CPU time.
Nov 23 03:01:55 localhost podman[57919]: 2025-11-23 08:01:55.120099806 +0000 UTC m=+3.064609581 container died 547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step2, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, version=17.1.12, container_name=create_haproxy_wrapper, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']})
Nov 23 03:01:55 localhost podman[58213]: 2025-11-23 08:01:55.216261376 +0000 UTC m=+0.082326084 container cleanup 547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=create_haproxy_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step2)
Nov 23 03:01:55 localhost systemd[1]: libpod-conmon-547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae.scope: Deactivated successfully.
Nov 23 03:01:55 localhost python3[57655]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers
Nov 23 03:01:55 localhost systemd[1]: var-lib-containers-storage-overlay-84d72a79238da9e41e472230adf30122e356dada3dee1ed84822dcd8584621e6-merged.mount: Deactivated successfully.
Nov 23 03:01:55 localhost systemd[1]: var-lib-containers-storage-overlay-b75d69f0f2cde9fc9824305dd942961655ee26dd5e86c5ce60bd1c2a9ea6511d-merged.mount: Deactivated successfully.
Nov 23 03:01:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-547e811391ebe52d6a24dbcb103b454f49d7117d20287d261d151429a5eac0ae-userdata-shm.mount: Deactivated successfully.
Nov 23 03:01:55 localhost python3[58269]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:56 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 23 03:01:56 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 23 03:01:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:01:56 localhost systemd[1]: tmp-crun.42Lv1J.mount: Deactivated successfully.
Nov 23 03:01:56 localhost podman[58318]: 2025-11-23 08:01:56.511631489 +0000 UTC m=+0.099218431 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git)
Nov 23 03:01:56 localhost podman[58318]: 2025-11-23 08:01:56.690936483 +0000 UTC m=+0.278523415 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=)
Nov 23 03:01:56 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:01:57 localhost python3[58418]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005532586 step=2 update_config_hash_only=False
Nov 23 03:01:57 localhost python3[58434]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:01:58 localhost python3[58450]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Nov 23 03:01:58 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 23 03:01:58 localhost ceph-osd[32615]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 23 03:02:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:02:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5108 writes, 22K keys, 5108 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5108 writes, 520 syncs, 9.82 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1853 writes, 6671 keys, 1853 commit groups, 1.0 writes per commit group, ingest: 2.71 MB, 0.00 MB/s#012Interval WAL: 1853 writes, 377 syncs, 4.92 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 23 03:02:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:02:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4430 writes, 20K keys, 4430 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4430 writes, 493 syncs, 8.99 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1046 writes, 3645 keys, 1046 commit groups, 1.0 writes per commit group, ingest: 1.94 MB, 0.00 MB/s#012Interval WAL: 1046 writes, 297 syncs, 3.52 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55720e44a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55720e44a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Nov 23 03:02:06 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 23 03:02:07 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 23 03:02:10 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 23 03:02:10 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 23 03:02:13 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 23 03:02:13 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 23 03:02:18 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 23 03:02:18 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 23 03:02:20 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 23 03:02:20 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 23 03:02:22 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Nov 23 03:02:22 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Nov 23 03:02:23 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 23 03:02:23 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 23 03:02:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:02:27 localhost podman[58453]: 2025-11-23 08:02:27.189066066 +0000 UTC m=+0.089094483 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Nov 23 03:02:27 localhost podman[58453]: 2025-11-23 08:02:27.384050146 +0000 UTC m=+0.284078583 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc.)
Nov 23 03:02:27 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:02:29 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 23 03:02:29 localhost ceph-osd[31668]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 23 03:02:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:02:58 localhost podman[58608]: 2025-11-23 08:02:58.174445046 +0000 UTC m=+0.080223208 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd)
Nov 23 03:02:58 localhost podman[58608]: 2025-11-23 08:02:58.391917211 +0000 UTC m=+0.297695303 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z)
Nov 23 03:02:58 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:03:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:03:29 localhost podman[58635]: 2025-11-23 08:03:29.171958869 +0000 UTC m=+0.074762302 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:03:29 localhost podman[58635]: 2025-11-23 08:03:29.354911852 +0000 UTC m=+0.257715285 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64)
Nov 23 03:03:29 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:03:34 localhost sshd[58709]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:04:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:04:00 localhost systemd[1]: tmp-crun.UT3czL.mount: Deactivated successfully.
Nov 23 03:04:00 localhost podman[58743]: 2025-11-23 08:04:00.205003681 +0000 UTC m=+0.098021112 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:04:00 localhost podman[58743]: 2025-11-23 08:04:00.412870251 +0000 UTC m=+0.305887722 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, 
config_id=tripleo_step1, container_name=metrics_qdr, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z)
Nov 23 03:04:00 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:04:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:04:31 localhost systemd[1]: tmp-crun.p7rDv7.mount: Deactivated successfully.
Nov 23 03:04:31 localhost podman[58772]: 2025-11-23 08:04:31.179900264 +0000 UTC m=+0.082326930 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red 
Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd)
Nov 23 03:04:31 localhost podman[58772]: 2025-11-23 08:04:31.37495573 +0000 UTC m=+0.277382416 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team)
Nov 23 03:04:31 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:04:36 localhost systemd[1]: tmp-crun.0QQWxX.mount: Deactivated successfully.
Nov 23 03:04:36 localhost podman[58904]: 2025-11-23 08:04:36.180158851 +0000 UTC m=+0.100358957 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, release=553, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph)
Nov 23 03:04:36 localhost podman[58904]: 2025-11-23 08:04:36.309964459 +0000 UTC m=+0.230164535 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, version=7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, ceph=True)
Nov 23 03:05:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:05:02 localhost podman[59045]: 2025-11-23 08:05:02.178621789 +0000 UTC m=+0.083622826 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, tcib_managed=true, release=1761123044, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1)
Nov 23 03:05:02 localhost podman[59045]: 2025-11-23 08:05:02.395232719 +0000 UTC m=+0.300233756 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=metrics_qdr, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, 
batch=17.1_20251118.1, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64)
Nov 23 03:05:02 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:05:13 localhost sshd[59077]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:05:19 localhost sshd[59079]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:05:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:05:33 localhost podman[59080]: 2025-11-23 08:05:33.175991682 +0000 UTC m=+0.083369240 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Nov 23 03:05:33 localhost podman[59080]: 2025-11-23 08:05:33.393993621 +0000 UTC m=+0.301371159 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Nov 23 03:05:33 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:06:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:06:04 localhost podman[59186]: 2025-11-23 08:06:04.186488555 +0000 UTC m=+0.091302021 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64)
Nov 23 03:06:04 localhost podman[59186]: 2025-11-23 08:06:04.366999811 +0000 UTC m=+0.271813267 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, container_name=metrics_qdr)
Nov 23 03:06:04 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:06:29 localhost python3[59262]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:06:29 localhost python3[59307]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885189.0179436-99003-278395815162794/source _original_basename=tmpbtjch9cu follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:30 localhost python3[59337]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:06:32 localhost ansible-async_wrapper.py[59509]: Invoked with 151186691656 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885191.9621465-99317-116486014228774/AnsiballZ_command.py _
Nov 23 03:06:32 localhost ansible-async_wrapper.py[59512]: Starting module and watcher
Nov 23 03:06:32 localhost ansible-async_wrapper.py[59512]: Start watching 59513 (3600)
Nov 23 03:06:32 localhost ansible-async_wrapper.py[59513]: Start module (59513)
Nov 23 03:06:32 localhost ansible-async_wrapper.py[59509]: Return async_wrapper task started.
Nov 23 03:06:32 localhost python3[59533]: ansible-ansible.legacy.async_status Invoked with jid=151186691656.59509 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:06:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:06:35 localhost podman[59584]: 2025-11-23 08:06:35.008279645 +0000 UTC m=+0.089823162 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:06:35 localhost podman[59584]: 2025-11-23 08:06:35.19696772 +0000 UTC m=+0.278511247 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4)
Nov 23 03:06:35 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:06:36 localhost puppet-user[59532]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 03:06:36 localhost puppet-user[59532]:   (file: /etc/puppet/hiera.yaml)
Nov 23 03:06:36 localhost puppet-user[59532]: Warning: Undefined variable '::deploy_config_name';
Nov 23 03:06:36 localhost puppet-user[59532]:   (file & line not available)
Nov 23 03:06:36 localhost puppet-user[59532]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 03:06:36 localhost puppet-user[59532]:   (file & line not available)
Nov 23 03:06:36 localhost puppet-user[59532]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Nov 23 03:06:36 localhost puppet-user[59532]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Nov 23 03:06:36 localhost puppet-user[59532]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.11 seconds
Nov 23 03:06:36 localhost puppet-user[59532]: Notice: Applied catalog in 0.04 seconds
Nov 23 03:06:36 localhost puppet-user[59532]: Application:
Nov 23 03:06:36 localhost puppet-user[59532]:   Initial environment: production
Nov 23 03:06:36 localhost puppet-user[59532]:   Converged environment: production
Nov 23 03:06:36 localhost puppet-user[59532]:         Run mode: user
Nov 23 03:06:36 localhost puppet-user[59532]: Changes:
Nov 23 03:06:36 localhost puppet-user[59532]: Events:
Nov 23 03:06:36 localhost puppet-user[59532]: Resources:
Nov 23 03:06:36 localhost puppet-user[59532]:            Total: 10
Nov 23 03:06:36 localhost puppet-user[59532]: Time:
Nov 23 03:06:36 localhost puppet-user[59532]:         Schedule: 0.00
Nov 23 03:06:36 localhost puppet-user[59532]:             File: 0.00
Nov 23 03:06:36 localhost puppet-user[59532]:             Exec: 0.01
Nov 23 03:06:36 localhost puppet-user[59532]:           Augeas: 0.01
Nov 23 03:06:36 localhost puppet-user[59532]:   Transaction evaluation: 0.03
Nov 23 03:06:36 localhost puppet-user[59532]:   Catalog application: 0.04
Nov 23 03:06:36 localhost puppet-user[59532]:   Config retrieval: 0.15
Nov 23 03:06:36 localhost puppet-user[59532]:         Last run: 1763885196
Nov 23 03:06:36 localhost puppet-user[59532]:       Filebucket: 0.00
Nov 23 03:06:36 localhost puppet-user[59532]:            Total: 0.04
Nov 23 03:06:36 localhost puppet-user[59532]: Version:
Nov 23 03:06:36 localhost puppet-user[59532]:           Config: 1763885196
Nov 23 03:06:36 localhost puppet-user[59532]:           Puppet: 7.10.0
Nov 23 03:06:36 localhost ansible-async_wrapper.py[59513]: Module complete (59513)
Nov 23 03:06:37 localhost ansible-async_wrapper.py[59512]: Done in kid B.
Nov 23 03:06:43 localhost python3[59767]: ansible-ansible.legacy.async_status Invoked with jid=151186691656.59509 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:06:43 localhost python3[59783]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 03:06:44 localhost python3[59799]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:06:45 localhost python3[59849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:06:45 localhost python3[59867]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp_hfa72mt recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 03:06:45 localhost python3[59897]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:47 localhost python3[60000]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Nov 23 03:06:47 localhost python3[60019]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:49 localhost python3[60051]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:06:49 localhost python3[60101]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:06:50 localhost python3[60119]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:50 localhost sshd[60120]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:06:50 localhost python3[60183]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:06:51 localhost python3[60201]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:51 localhost python3[60263]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:06:51 localhost python3[60281]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:52 localhost python3[60343]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:06:52 localhost python3[60361]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:53 localhost python3[60391]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:06:53 localhost systemd[1]: Reloading.
Nov 23 03:06:53 localhost systemd-rc-local-generator[60413]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:06:53 localhost systemd-sysv-generator[60419]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:06:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:06:54 localhost python3[60476]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:06:54 localhost python3[60494]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:54 localhost python3[60556]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:06:55 localhost python3[60574]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:06:55 localhost python3[60604]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:06:55 localhost systemd[1]: Reloading.
Nov 23 03:06:55 localhost systemd-rc-local-generator[60627]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:06:55 localhost systemd-sysv-generator[60634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:06:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:06:56 localhost systemd[1]: Starting Create netns directory...
Nov 23 03:06:56 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 03:06:56 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 03:06:56 localhost systemd[1]: Finished Create netns directory.
Nov 23 03:06:56 localhost python3[60661]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Nov 23 03:06:58 localhost python3[60719]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Nov 23 03:06:58 localhost podman[60867]: 2025-11-23 08:06:58.835108873 +0000 UTC m=+0.073430194 container create 53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, container_name=nova_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:06:58 localhost podman[60891]: 2025-11-23 08:06:58.859966518 +0000 UTC m=+0.064236928 container create 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, release=1761123044, container_name=rsyslog, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git)
Nov 23 03:06:58 localhost podman[60889]: 2025-11-23 08:06:58.867156911 +0000 UTC m=+0.075361906 container create 02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_init_log, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:06:58 localhost podman[60867]: 2025-11-23 08:06:58.788547189 +0000 UTC m=+0.026868530 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6.scope.
Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642.scope.
Nov 23 03:06:58 localhost systemd[1]: Started libcrun container.
Nov 23 03:06:58 localhost systemd[1]: Started libcrun container.
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost podman[60899]: 2025-11-23 08:06:58.911881977 +0000 UTC m=+0.109836748 container create 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12)
Nov 23 03:06:58 localhost podman[60889]: 2025-11-23 08:06:58.823585795 +0000 UTC m=+0.031790830 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Nov 23 03:06:58 localhost podman[60891]: 2025-11-23 08:06:58.823982456 +0000 UTC m=+0.028252876 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Nov 23 03:06:58 localhost podman[60899]: 2025-11-23 08:06:58.847624549 +0000 UTC m=+0.045579310 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.scope.
Nov 23 03:06:58 localhost systemd[1]: Started libpod-conmon-02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc.scope.
Nov 23 03:06:58 localhost systemd[1]: Started libcrun container.
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467d527827cab77d79ca943209511906e8bb483640c5d029ef06bb9a4c899f9d/merged/scripts supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467d527827cab77d79ca943209511906e8bb483640c5d029ef06bb9a4c899f9d/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost systemd[1]: Started libcrun container.
Nov 23 03:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49811fcc3e5d752fe49ab74a12b54f8b5604be5b8ba1bcaf72dfc24524c4f335/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:58 localhost podman[60867]: 2025-11-23 08:06:58.966172438 +0000 UTC m=+0.204493779 container init 53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git)
Nov 23 03:06:58 localhost podman[60889]: 2025-11-23 08:06:58.972798496 +0000 UTC m=+0.181003491 container init 02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, container_name=ceilometer_init_log)
Nov 23 03:06:58 localhost podman[60867]: 2025-11-23 08:06:58.97596067 +0000 UTC m=+0.214282011 container start 53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_virtlogd_wrapper, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:06:58 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume 
/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:06:58 localhost podman[60889]: 2025-11-23 08:06:58.985442153 +0000 UTC m=+0.193647148 container start 02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., container_name=ceilometer_init_log, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Nov 23 03:06:58 localhost systemd[1]: libpod-02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc.scope: Deactivated successfully.
Nov 23 03:06:58 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer
Nov 23 03:06:59 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:06:59 localhost systemd[1]: Created slice User Slice of UID 0.
Nov 23 03:06:59 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 23 03:06:59 localhost podman[60891]: 2025-11-23 08:06:59.015802235 +0000 UTC m=+0.220072675 container init 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, container_name=rsyslog, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, 
com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:49:49Z, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:06:59 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 23 03:06:59 localhost podman[60891]: 2025-11-23 08:06:59.069059449 +0000 UTC m=+0.273329899 container start 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, tcib_managed=true, container_name=rsyslog)
Nov 23 03:06:59 localhost podman[60927]: 2025-11-23 08:06:58.9699635 +0000 UTC m=+0.135460154 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 03:06:59 localhost podman[60927]: 2025-11-23 08:06:59.073369984 +0000 UTC m=+0.238866598 container create 501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, container_name=nova_statedir_owner, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 
17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.buildah.version=1.41.4)
Nov 23 03:06:59 localhost systemd[1]: Starting User Manager for UID 0...
Nov 23 03:06:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:06:59 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=3ea58633c99f05090f3faea662c628ca --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Nov 23 03:06:59 localhost podman[60899]: 2025-11-23 08:06:59.087154672 +0000 UTC m=+0.285109463 container init 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3)
Nov 23 03:06:59 localhost systemd[1]: Started libpod-conmon-501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa.scope.
Nov 23 03:06:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:06:59 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:06:59 localhost podman[60979]: 2025-11-23 08:06:59.140659453 +0000 UTC m=+0.136004787 container died 02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, config_id=tripleo_step3, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_init_log, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public)
Nov 23 03:06:59 localhost systemd[1]: Started libcrun container.
Nov 23 03:06:59 localhost systemd[1]: libpod-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642.scope: Deactivated successfully.
Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd5331fcbf7daa72bbe8d627667a57f81716bed82cfa9e39304d68f21f76c87/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd5331fcbf7daa72bbe8d627667a57f81716bed82cfa9e39304d68f21f76c87/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd5331fcbf7daa72bbe8d627667a57f81716bed82cfa9e39304d68f21f76c87/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:59 localhost podman[60927]: 2025-11-23 08:06:59.16147266 +0000 UTC m=+0.326969264 container init 501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-type=git, container_name=nova_statedir_owner, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public)
Nov 23 03:06:59 localhost podman[60927]: 2025-11-23 08:06:59.170881671 +0000 UTC m=+0.336378305 container start 501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_statedir_owner, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step3, url=https://www.redhat.com, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc.)
Nov 23 03:06:59 localhost podman[60927]: 2025-11-23 08:06:59.171030895 +0000 UTC m=+0.336527509 container attach 501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public)
Nov 23 03:06:59 localhost podman[61049]: 2025-11-23 08:06:59.202309891 +0000 UTC m=+0.041263314 container died 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, managed_by=tripleo_ansible)
Nov 23 03:06:59 localhost systemd[1]: libpod-501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa.scope: Deactivated successfully.
Nov 23 03:06:59 localhost podman[60979]: 2025-11-23 08:06:59.231616875 +0000 UTC m=+0.226962169 container cleanup 02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:06:59 localhost systemd[1]: libpod-conmon-02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc.scope: Deactivated successfully.
Nov 23 03:06:59 localhost systemd[61003]: Queued start job for default target Main User Target.
Nov 23 03:06:59 localhost systemd[61003]: Created slice User Application Slice.
Nov 23 03:06:59 localhost systemd[61003]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 23 03:06:59 localhost systemd[61003]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 03:06:59 localhost systemd[61003]: Reached target Paths.
Nov 23 03:06:59 localhost systemd[61003]: Reached target Timers.
Nov 23 03:06:59 localhost systemd[61003]: Starting D-Bus User Message Bus Socket...
Nov 23 03:06:59 localhost systemd[61003]: Starting Create User's Volatile Files and Directories...
Nov 23 03:06:59 localhost systemd[61003]: Listening on D-Bus User Message Bus Socket.
Nov 23 03:06:59 localhost systemd[61003]: Reached target Sockets.
Nov 23 03:06:59 localhost systemd[61003]: Finished Create User's Volatile Files and Directories.
Nov 23 03:06:59 localhost systemd[61003]: Reached target Basic System.
Nov 23 03:06:59 localhost systemd[61003]: Reached target Main User Target.
Nov 23 03:06:59 localhost systemd[61003]: Startup finished in 136ms.
Nov 23 03:06:59 localhost systemd[1]: Started User Manager for UID 0.
Nov 23 03:06:59 localhost podman[60927]: 2025-11-23 08:06:59.267158706 +0000 UTC m=+0.432655340 container died 501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_statedir_owner, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 
name=rhosp17/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044)
Nov 23 03:06:59 localhost systemd[1]: Started Session c1 of User root.
Nov 23 03:06:59 localhost systemd[1]: Started Session c2 of User root.
Nov 23 03:06:59 localhost podman[61049]: 2025-11-23 08:06:59.275785336 +0000 UTC m=+0.114738769 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, container_name=rsyslog, architecture=x86_64, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team)
Nov 23 03:06:59 localhost podman[60899]: 2025-11-23 08:06:59.282393582 +0000 UTC m=+0.480348353 container start 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.12, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com)
Nov 23 03:06:59 localhost systemd[1]: libpod-conmon-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642.scope: Deactivated successfully.
Nov 23 03:06:59 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d31718fcd17fdeee6489534105191c7a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Nov 23 03:06:59 localhost systemd[1]: session-c2.scope: Deactivated successfully.
Nov 23 03:06:59 localhost systemd[1]: session-c1.scope: Deactivated successfully.
Nov 23 03:06:59 localhost podman[61083]: 2025-11-23 08:06:59.455159612 +0000 UTC m=+0.224439572 container cleanup 501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_statedir_owner, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true)
Nov 23 03:06:59 localhost systemd[1]: libpod-conmon-501b323fe30eb535c82325aa58819050bafd56484e9615dd371b68bcbdac05fa.scope: Deactivated successfully.
Nov 23 03:06:59 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py
Nov 23 03:06:59 localhost podman[61024]: 2025-11-23 08:06:59.32197052 +0000 UTC m=+0.183297901 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=collectd, config_id=tripleo_step3, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, distribution-scope=public)
Nov 23 03:06:59 localhost podman[61024]: 2025-11-23 08:06:59.557317603 +0000 UTC m=+0.418645054 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.buildah.version=1.41.4)
Nov 23 03:06:59 localhost podman[61024]: unhealthy
Nov 23 03:06:59 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:06:59 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Failed with result 'exit-code'.
Nov 23 03:06:59 localhost podman[61231]: 2025-11-23 08:06:59.713993662 +0000 UTC m=+0.090035898 container create ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.openshift.expose-services=)
Nov 23 03:06:59 localhost systemd[1]: Started libpod-conmon-ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945.scope.
Nov 23 03:06:59 localhost systemd[1]: Started libcrun container.
Nov 23 03:06:59 localhost podman[61231]: 2025-11-23 08:06:59.679003027 +0000 UTC m=+0.055045333 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef10dbd8e654e3d33969e8a1d0f6410664da7d48e949f4169105b8cba2a78b06/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef10dbd8e654e3d33969e8a1d0f6410664da7d48e949f4169105b8cba2a78b06/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef10dbd8e654e3d33969e8a1d0f6410664da7d48e949f4169105b8cba2a78b06/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef10dbd8e654e3d33969e8a1d0f6410664da7d48e949f4169105b8cba2a78b06/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:06:59 localhost podman[61231]: 2025-11-23 08:06:59.789564632 +0000 UTC m=+0.165606878 container init ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container)
Nov 23 03:06:59 localhost podman[61231]: 2025-11-23 08:06:59.798231704 +0000 UTC m=+0.174273950 container start ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:35:22Z)
Nov 23 03:06:59 localhost systemd[1]: var-lib-containers-storage-overlay-49811fcc3e5d752fe49ab74a12b54f8b5604be5b8ba1bcaf72dfc24524c4f335-merged.mount: Deactivated successfully.
Nov 23 03:06:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02694fc8422fd410a35e785df371f82f6796384e61511a363ee3c5f2984403dc-userdata-shm.mount: Deactivated successfully.
Nov 23 03:06:59 localhost systemd[1]: var-lib-containers-storage-overlay-8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088-merged.mount: Deactivated successfully.
Nov 23 03:06:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642-userdata-shm.mount: Deactivated successfully.
Nov 23 03:07:00 localhost podman[61298]: 2025-11-23 08:07:00.010480339 +0000 UTC m=+0.084813329 container create 71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_virtsecretd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z)
Nov 23 03:07:00 localhost systemd[1]: Started libpod-conmon-71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5.scope.
Nov 23 03:07:00 localhost podman[61298]: 2025-11-23 08:06:59.965864096 +0000 UTC m=+0.040197116 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:00 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost podman[61298]: 2025-11-23 08:07:00.106383033 +0000 UTC m=+0.180716023 container init 71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtsecretd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:07:00 localhost podman[61298]: 2025-11-23 08:07:00.115753584 +0000 UTC m=+0.190086564 container start 71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, container_name=nova_virtsecretd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044)
Nov 23 03:07:00 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:00 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:07:00 localhost systemd[1]: Started Session c3 of User root.
Nov 23 03:07:00 localhost systemd[1]: session-c3.scope: Deactivated successfully.
Nov 23 03:07:00 localhost podman[61431]: 2025-11-23 08:07:00.604280965 +0000 UTC m=+0.071369130 container create aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Nov 23 03:07:00 localhost systemd[1]: Started libpod-conmon-aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.scope.
Nov 23 03:07:00 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff2a78c8cc62b07a928d0b2b3f68754d6aca28a37f592a56866830a4a003509/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff2a78c8cc62b07a928d0b2b3f68754d6aca28a37f592a56866830a4a003509/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost podman[61431]: 2025-11-23 08:07:00.574798706 +0000 UTC m=+0.041886881 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Nov 23 03:07:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:07:00 localhost podman[61448]: 2025-11-23 08:07:00.679604559 +0000 UTC m=+0.099800890 container create 6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtnodedevd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true)
Nov 23 03:07:00 localhost podman[61431]: 2025-11-23 08:07:00.682404574 +0000 UTC m=+0.149492769 container init aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, version=17.1.12, tcib_managed=true, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:07:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:07:00 localhost podman[61431]: 2025-11-23 08:07:00.706102517 +0000 UTC m=+0.173190682 container start aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step3, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, release=1761123044)
Nov 23 03:07:00 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:07:00 localhost systemd[1]: Started libpod-conmon-6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b.scope.
Nov 23 03:07:00 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=83ab5b37680071f0941108e43c518cc1 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Nov 23 03:07:00 localhost systemd[1]: Started Session c4 of User root.
Nov 23 03:07:00 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost podman[61448]: 2025-11-23 08:07:00.633076074 +0000 UTC m=+0.053272385 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:00 localhost podman[61448]: 2025-11-23 08:07:00.736608522 +0000 UTC m=+0.156804853 container init 6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, container_name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container)
Nov 23 03:07:00 localhost podman[61448]: 2025-11-23 08:07:00.744933995 +0000 UTC m=+0.165130326 container start 6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtnodedevd, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt)
Nov 23 03:07:00 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:00 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:07:00 localhost systemd[1]: Started Session c5 of User root.
Nov 23 03:07:00 localhost systemd[1]: session-c4.scope: Deactivated successfully.
Nov 23 03:07:00 localhost podman[61471]: 2025-11-23 08:07:00.814222028 +0000 UTC m=+0.100401226 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, release=1761123044)
Nov 23 03:07:00 localhost kernel: Loading iSCSI transport class v2.0-870.
Nov 23 03:07:00 localhost systemd[1]: session-c5.scope: Deactivated successfully.
Nov 23 03:07:00 localhost podman[61471]: 2025-11-23 08:07:00.919123852 +0000 UTC m=+0.205303030 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:07:00 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:07:01 localhost podman[61611]: 2025-11-23 08:07:01.341345471 +0000 UTC m=+0.067588868 container create bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=nova_virtstoraged, build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Nov 23 03:07:01 localhost systemd[1]: Started libpod-conmon-bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071.scope.
Nov 23 03:07:01 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost podman[61611]: 2025-11-23 08:07:01.405966169 +0000 UTC m=+0.132209546 container init bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z)
Nov 23 03:07:01 localhost podman[61611]: 2025-11-23 08:07:01.308720608 +0000 UTC m=+0.034963995 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:01 localhost podman[61611]: 2025-11-23 08:07:01.414866666 +0000 UTC m=+0.141110043 container start bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, batch=17.1_20251118.1, container_name=nova_virtstoraged, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Nov 23 03:07:01 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:01 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:07:01 localhost systemd[1]: Started Session c6 of User root.
Nov 23 03:07:01 localhost systemd[1]: session-c6.scope: Deactivated successfully.
Nov 23 03:07:01 localhost sshd[61725]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:07:01 localhost podman[61714]: 2025-11-23 08:07:01.894953413 +0000 UTC m=+0.084728337 container create 65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-type=git, container_name=nova_virtqemud, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 23 03:07:01 localhost systemd[1]: Started libpod-conmon-65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690.scope.
Nov 23 03:07:01 localhost podman[61714]: 2025-11-23 08:07:01.849847577 +0000 UTC m=+0.039622561 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:01 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:01 localhost podman[61714]: 2025-11-23 08:07:01.972265299 +0000 UTC m=+0.162040223 container init 65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12)
Nov 23 03:07:01 localhost podman[61714]: 2025-11-23 08:07:01.981754093 +0000 UTC m=+0.171529027 container start 65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step3, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container)
Nov 23 03:07:01 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume 
/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:02 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:07:02 localhost systemd[1]: Started Session c7 of User root.
Nov 23 03:07:02 localhost systemd[1]: session-c7.scope: Deactivated successfully.
Nov 23 03:07:02 localhost podman[61819]: 2025-11-23 08:07:02.447237768 +0000 UTC m=+0.083137364 container create ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtproxyd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 
'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1)
Nov 23 03:07:02 localhost systemd[1]: Started libpod-conmon-ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824.scope.
Nov 23 03:07:02 localhost podman[61819]: 2025-11-23 08:07:02.401694491 +0000 UTC m=+0.037594097 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:02 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:02 localhost podman[61819]: 2025-11-23 08:07:02.516255033 +0000 UTC m=+0.152154619 container init ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-type=git, architecture=x86_64, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtproxyd, release=1761123044, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1)
Nov 23 03:07:02 localhost podman[61819]: 2025-11-23 08:07:02.52545617 +0000 UTC m=+0.161355756 container start ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=nova_virtproxyd, name=rhosp17/openstack-nova-libvirt, architecture=x86_64)
Nov 23 03:07:02 localhost python3[60719]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:07:02 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:07:02 localhost systemd[1]: Started Session c8 of User root.
Nov 23 03:07:02 localhost systemd[1]: session-c8.scope: Deactivated successfully.
Nov 23 03:07:03 localhost python3[61901]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:03 localhost python3[61917]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:03 localhost python3[61933]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:03 localhost python3[61949]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:04 localhost python3[61965]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:04 localhost python3[61981]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:04 localhost python3[61997]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:04 localhost python3[62013]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:05 localhost python3[62029]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:07:05 localhost systemd[1]: tmp-crun.llqFtT.mount: Deactivated successfully.
Nov 23 03:07:05 localhost podman[62046]: 2025-11-23 08:07:05.430193162 +0000 UTC m=+0.096020359 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, url=https://www.redhat.com, release=1761123044, container_name=metrics_qdr, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:07:05 localhost python3[62045]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:05 localhost podman[62046]: 2025-11-23 08:07:05.654148819 +0000 UTC m=+0.319976046 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, architecture=x86_64, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-qdrouterd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4)
Nov 23 03:07:05 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:07:05 localhost python3[62090]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:05 localhost python3[62106]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:06 localhost python3[62122]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:06 localhost python3[62138]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:06 localhost python3[62154]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:07 localhost python3[62170]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:07 localhost python3[62186]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:07 localhost python3[62202]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:07:08 localhost python3[62263]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:08 localhost python3[62292]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:09 localhost python3[62321]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:09 localhost python3[62350]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:10 localhost python3[62379]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:10 localhost python3[62408]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:11 localhost python3[62437]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:11 localhost python3[62466]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:12 localhost python3[62495]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885227.616124-100556-220111033309621/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:12 localhost python3[62511]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 03:07:12 localhost systemd[1]: Reloading.
Nov 23 03:07:12 localhost systemd-rc-local-generator[62537]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:12 localhost systemd-sysv-generator[62542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:13 localhost systemd[1]: Stopping User Manager for UID 0...
Nov 23 03:07:13 localhost systemd[61003]: Activating special unit Exit the Session...
Nov 23 03:07:13 localhost systemd[61003]: Stopped target Main User Target.
Nov 23 03:07:13 localhost systemd[61003]: Stopped target Basic System.
Nov 23 03:07:13 localhost systemd[61003]: Stopped target Paths.
Nov 23 03:07:13 localhost systemd[61003]: Stopped target Sockets.
Nov 23 03:07:13 localhost systemd[61003]: Stopped target Timers.
Nov 23 03:07:13 localhost systemd[61003]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 03:07:13 localhost systemd[61003]: Closed D-Bus User Message Bus Socket.
Nov 23 03:07:13 localhost systemd[61003]: Stopped Create User's Volatile Files and Directories.
Nov 23 03:07:13 localhost systemd[61003]: Removed slice User Application Slice.
Nov 23 03:07:13 localhost systemd[61003]: Reached target Shutdown.
Nov 23 03:07:13 localhost systemd[61003]: Finished Exit the Session.
Nov 23 03:07:13 localhost systemd[61003]: Reached target Exit the Session.
Nov 23 03:07:13 localhost systemd[1]: user@0.service: Deactivated successfully.
Nov 23 03:07:13 localhost systemd[1]: Stopped User Manager for UID 0.
Nov 23 03:07:13 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 23 03:07:13 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 23 03:07:13 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 23 03:07:13 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 23 03:07:13 localhost systemd[1]: Removed slice User Slice of UID 0.
Nov 23 03:07:13 localhost python3[62565]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:13 localhost systemd[1]: Reloading.
Nov 23 03:07:13 localhost systemd-rc-local-generator[62588]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:13 localhost systemd-sysv-generator[62593]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:13 localhost systemd[1]: Starting collectd container...
Nov 23 03:07:14 localhost systemd[1]: Started collectd container.
Nov 23 03:07:14 localhost python3[62632]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:14 localhost systemd[1]: Reloading.
Nov 23 03:07:14 localhost systemd-rc-local-generator[62657]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:14 localhost systemd-sysv-generator[62663]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:14 localhost systemd[1]: Starting iscsid container...
Nov 23 03:07:15 localhost systemd[1]: Started iscsid container.
Nov 23 03:07:15 localhost python3[62698]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:15 localhost systemd[1]: Reloading.
Nov 23 03:07:15 localhost systemd-sysv-generator[62727]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:15 localhost systemd-rc-local-generator[62723]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:15 localhost systemd[1]: Starting nova_virtlogd_wrapper container...
Nov 23 03:07:16 localhost systemd[1]: Started nova_virtlogd_wrapper container.
Nov 23 03:07:16 localhost python3[62764]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:16 localhost systemd[1]: Reloading.
Nov 23 03:07:16 localhost systemd-rc-local-generator[62788]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:16 localhost systemd-sysv-generator[62794]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:17 localhost systemd[1]: Starting nova_virtnodedevd container...
Nov 23 03:07:17 localhost tripleo-start-podman-container[62804]: Creating additional drop-in dependency for "nova_virtnodedevd" (6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b)
Nov 23 03:07:17 localhost systemd[1]: Reloading.
Nov 23 03:07:17 localhost systemd-rc-local-generator[62858]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:17 localhost systemd-sysv-generator[62862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:17 localhost systemd[1]: Started nova_virtnodedevd container.
Nov 23 03:07:18 localhost python3[62887]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:19 localhost systemd[1]: Reloading.
Nov 23 03:07:19 localhost systemd-sysv-generator[62917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:19 localhost systemd-rc-local-generator[62914]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:19 localhost systemd[1]: Starting nova_virtproxyd container...
Nov 23 03:07:19 localhost tripleo-start-podman-container[62927]: Creating additional drop-in dependency for "nova_virtproxyd" (ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824)
Nov 23 03:07:19 localhost systemd[1]: Reloading.
Nov 23 03:07:19 localhost systemd-sysv-generator[62991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:19 localhost systemd-rc-local-generator[62987]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:20 localhost systemd[1]: Started nova_virtproxyd container.
Nov 23 03:07:20 localhost python3[63012]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:20 localhost systemd[1]: Reloading.
Nov 23 03:07:20 localhost systemd-rc-local-generator[63038]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:20 localhost systemd-sysv-generator[63043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:21 localhost systemd[1]: Starting nova_virtqemud container...
Nov 23 03:07:21 localhost tripleo-start-podman-container[63051]: Creating additional drop-in dependency for "nova_virtqemud" (65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690)
Nov 23 03:07:21 localhost systemd[1]: Reloading.
Nov 23 03:07:21 localhost systemd-sysv-generator[63113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:21 localhost systemd-rc-local-generator[63108]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:21 localhost systemd[1]: Started nova_virtqemud container.
Nov 23 03:07:22 localhost python3[63137]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:22 localhost systemd[1]: Reloading.
Nov 23 03:07:22 localhost systemd-rc-local-generator[63161]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:22 localhost systemd-sysv-generator[63165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:22 localhost systemd[1]: Starting dnf makecache...
Nov 23 03:07:22 localhost systemd[1]: Starting nova_virtsecretd container...
Nov 23 03:07:22 localhost tripleo-start-podman-container[63177]: Creating additional drop-in dependency for "nova_virtsecretd" (71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5)
Nov 23 03:07:22 localhost systemd[1]: Reloading.
Nov 23 03:07:22 localhost dnf[63175]: Updating Subscription Management repositories.
Nov 23 03:07:22 localhost systemd-rc-local-generator[63230]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:22 localhost systemd-sysv-generator[63234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:23 localhost systemd[1]: Started nova_virtsecretd container.
Nov 23 03:07:23 localhost python3[63260]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:23 localhost systemd[1]: Reloading.
Nov 23 03:07:23 localhost systemd-sysv-generator[63290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:23 localhost systemd-rc-local-generator[63285]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:23 localhost systemd[1]: Starting nova_virtstoraged container...
Nov 23 03:07:24 localhost tripleo-start-podman-container[63300]: Creating additional drop-in dependency for "nova_virtstoraged" (bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071)
Nov 23 03:07:24 localhost systemd[1]: Reloading.
Nov 23 03:07:24 localhost systemd-sysv-generator[63363]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:24 localhost systemd-rc-local-generator[63358]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:24 localhost systemd[1]: Started nova_virtstoraged container.
Nov 23 03:07:24 localhost dnf[63175]: Metadata cache refreshed recently.
Nov 23 03:07:24 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 23 03:07:24 localhost systemd[1]: Finished dnf makecache.
Nov 23 03:07:24 localhost systemd[1]: dnf-makecache.service: Consumed 2.028s CPU time.
Nov 23 03:07:25 localhost python3[63386]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:07:25 localhost systemd[1]: Reloading.
Nov 23 03:07:25 localhost systemd-sysv-generator[63416]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:07:25 localhost systemd-rc-local-generator[63411]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:07:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:07:25 localhost systemd[1]: Starting rsyslog container...
Nov 23 03:07:25 localhost systemd[1]: tmp-crun.iMBnSx.mount: Deactivated successfully.
Nov 23 03:07:25 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:25 localhost podman[63425]: 2025-11-23 08:07:25.620429462 +0000 UTC m=+0.127379558 container init 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:07:25 localhost podman[63425]: 2025-11-23 08:07:25.628089147 +0000 UTC m=+0.135039253 container start 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, version=17.1.12, distribution-scope=public, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp17/openstack-rsyslog, container_name=rsyslog, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 rsyslog)
Nov 23 03:07:25 localhost podman[63425]: rsyslog
Nov 23 03:07:25 localhost systemd[1]: Started rsyslog container.
Nov 23 03:07:25 localhost systemd[1]: libpod-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642.scope: Deactivated successfully.
Nov 23 03:07:25 localhost podman[63457]: 2025-11-23 08:07:25.794900986 +0000 UTC m=+0.056003028 container died 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, vcs-type=git, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.expose-services=)
Nov 23 03:07:25 localhost podman[63457]: 2025-11-23 08:07:25.82195733 +0000 UTC m=+0.083059352 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, vcs-type=git, container_name=rsyslog, version=17.1.12, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Nov 23 03:07:25 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:07:25 localhost podman[63474]: 2025-11-23 08:07:25.909816006 +0000 UTC m=+0.058896183 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, version=17.1.12, vcs-type=git, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:49:49Z, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Nov 23 03:07:25 localhost podman[63474]: rsyslog
Nov 23 03:07:25 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Nov 23 03:07:26 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Nov 23 03:07:26 localhost systemd[1]: Stopped rsyslog container.
Nov 23 03:07:26 localhost systemd[1]: Starting rsyslog container...
Nov 23 03:07:26 localhost python3[63502]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:26 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:26 localhost podman[63503]: 2025-11-23 08:07:26.187829682 +0000 UTC m=+0.097777191 container init 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=rsyslog)
Nov 23 03:07:26 localhost podman[63503]: 2025-11-23 08:07:26.19450534 +0000 UTC m=+0.104452849 container start 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, url=https://www.redhat.com, tcib_managed=true, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, vendor=Red Hat, Inc.)
Nov 23 03:07:26 localhost podman[63503]: rsyslog
Nov 23 03:07:26 localhost systemd[1]: Started rsyslog container.
Nov 23 03:07:26 localhost systemd[1]: libpod-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642.scope: Deactivated successfully.
Nov 23 03:07:26 localhost podman[63526]: 2025-11-23 08:07:26.356236123 +0000 UTC m=+0.055322893 container died 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, build-date=2025-11-18T22:49:49Z, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:07:26 localhost podman[63526]: 2025-11-23 08:07:26.379316577 +0000 UTC m=+0.078403307 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, tcib_managed=true, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Nov 23 03:07:26 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:07:26 localhost podman[63540]: 2025-11-23 08:07:26.447239163 +0000 UTC m=+0.046086276 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, release=1761123044)
Nov 23 03:07:26 localhost podman[63540]: rsyslog
Nov 23 03:07:26 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Nov 23 03:07:26 localhost systemd[1]: var-lib-containers-storage-overlay-8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088-merged.mount: Deactivated successfully.
Nov 23 03:07:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642-userdata-shm.mount: Deactivated successfully.
Nov 23 03:07:26 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2.
Nov 23 03:07:26 localhost systemd[1]: Stopped rsyslog container.
Nov 23 03:07:26 localhost systemd[1]: Starting rsyslog container...
Nov 23 03:07:26 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:26 localhost podman[63552]: 2025-11-23 08:07:26.711245746 +0000 UTC m=+0.123443453 container init 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:07:26 localhost podman[63552]: 2025-11-23 08:07:26.721432448 +0000 UTC m=+0.133630145 container start 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, release=1761123044, version=17.1.12, config_id=tripleo_step3, architecture=x86_64, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 rsyslog, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog)
Nov 23 03:07:26 localhost podman[63552]: rsyslog
Nov 23 03:07:26 localhost systemd[1]: Started rsyslog container.
Nov 23 03:07:26 localhost systemd[1]: libpod-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642.scope: Deactivated successfully.
Nov 23 03:07:26 localhost podman[63594]: 2025-11-23 08:07:26.901388595 +0000 UTC m=+0.064450816 container died 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, io.buildah.version=1.41.4, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3)
Nov 23 03:07:26 localhost podman[63594]: 2025-11-23 08:07:26.931407304 +0000 UTC m=+0.094469495 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z)
Nov 23 03:07:26 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:07:27 localhost podman[63636]: 2025-11-23 08:07:27.028687222 +0000 UTC m=+0.065884154 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:07:27 localhost podman[63636]: rsyslog
Nov 23 03:07:27 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Nov 23 03:07:27 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3.
Nov 23 03:07:27 localhost systemd[1]: Stopped rsyslog container.
Nov 23 03:07:27 localhost systemd[1]: Starting rsyslog container...
Nov 23 03:07:27 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:27 localhost podman[63690]: 2025-11-23 08:07:27.456595794 +0000 UTC m=+0.113417867 container init 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, 
io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-rsyslog-container, distribution-scope=public)
Nov 23 03:07:27 localhost podman[63690]: 2025-11-23 08:07:27.465958174 +0000 UTC m=+0.122780257 container start 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog)
Nov 23 03:07:27 localhost podman[63690]: rsyslog
Nov 23 03:07:27 localhost systemd[1]: Started rsyslog container.
Nov 23 03:07:27 localhost systemd[1]: libpod-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642.scope: Deactivated successfully.
Nov 23 03:07:27 localhost podman[63727]: 2025-11-23 08:07:27.630643924 +0000 UTC m=+0.050337450 container died 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Nov 23 03:07:27 localhost systemd[1]: tmp-crun.1sQ5tN.mount: Deactivated successfully.
Nov 23 03:07:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642-userdata-shm.mount: Deactivated successfully.
Nov 23 03:07:27 localhost systemd[1]: var-lib-containers-storage-overlay-8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088-merged.mount: Deactivated successfully.
Nov 23 03:07:27 localhost podman[63727]: 2025-11-23 08:07:27.655200738 +0000 UTC m=+0.074894204 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=rsyslog, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Nov 23 03:07:27 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:07:27 localhost podman[63739]: 2025-11-23 08:07:27.748832809 +0000 UTC m=+0.062928505 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, container_name=rsyslog, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-rsyslog, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 23 03:07:27 localhost podman[63739]: rsyslog
Nov 23 03:07:27 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Nov 23 03:07:27 localhost python3[63767]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005532586 step=3 update_config_hash_only=False
Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4.
Nov 23 03:07:28 localhost systemd[1]: Stopped rsyslog container.
Nov 23 03:07:28 localhost systemd[1]: Starting rsyslog container...
Nov 23 03:07:28 localhost systemd[1]: Started libcrun container.
Nov 23 03:07:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 23 03:07:28 localhost podman[63769]: 2025-11-23 08:07:28.217572639 +0000 UTC m=+0.119918462 container init 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, distribution-scope=public, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Nov 23 03:07:28 localhost podman[63769]: 2025-11-23 08:07:28.2266668 +0000 UTC m=+0.129012613 container start 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Nov 23 03:07:28 localhost podman[63769]: rsyslog
Nov 23 03:07:28 localhost systemd[1]: Started rsyslog container.
Nov 23 03:07:28 localhost systemd[1]: libpod-9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642.scope: Deactivated successfully.
Nov 23 03:07:28 localhost podman[63808]: 2025-11-23 08:07:28.392134482 +0000 UTC m=+0.050131615 container died 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, name=rhosp17/openstack-rsyslog, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog)
Nov 23 03:07:28 localhost podman[63808]: 2025-11-23 08:07:28.414820816 +0000 UTC m=+0.072817919 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, io.openshift.expose-services=, container_name=rsyslog, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public)
Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:07:28 localhost python3[63806]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:07:28 localhost podman[63821]: 2025-11-23 08:07:28.500389332 +0000 UTC m=+0.056068422 container cleanup 9c36b2f0babf1185e05cd0a841e96ed21d04830634a4d357b4e99e40febb0642 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ea58633c99f05090f3faea662c628ca'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:49Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog)
Nov 23 03:07:28 localhost podman[63821]: rsyslog
Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5.
Nov 23 03:07:28 localhost systemd[1]: Stopped rsyslog container.
Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly.
Nov 23 03:07:28 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Nov 23 03:07:28 localhost systemd[1]: Failed to start rsyslog container.
Nov 23 03:07:28 localhost python3[63847]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True
Nov 23 03:07:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:07:30 localhost podman[63848]: 2025-11-23 08:07:30.180818196 +0000 UTC m=+0.089468091 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:07:30 localhost podman[63848]: 2025-11-23 08:07:30.193093993 +0000 UTC m=+0.101743908 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:07:30 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:07:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:07:31 localhost podman[63869]: 2025-11-23 08:07:31.170619127 +0000 UTC m=+0.076554246 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Nov 23 03:07:31 localhost podman[63869]: 2025-11-23 08:07:31.209001789 +0000 UTC m=+0.114936898 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, release=1761123044)
Nov 23 03:07:31 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:07:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:07:36 localhost podman[63888]: 2025-11-23 08:07:36.176805365 +0000 UTC m=+0.078821768 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1)
Nov 23 03:07:36 localhost podman[63888]: 2025-11-23 08:07:36.375151672 +0000 UTC m=+0.277168055 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:07:36 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:08:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:08:01 localhost systemd[1]: tmp-crun.ZSiZ34.mount: Deactivated successfully.
Nov 23 03:08:01 localhost podman[63994]: 2025-11-23 08:08:01.194991464 +0000 UTC m=+0.097185447 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, 
version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Nov 23 03:08:01 localhost podman[63994]: 2025-11-23 08:08:01.21026249 +0000 UTC m=+0.112456443 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible)
Nov 23 03:08:01 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:08:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:08:01 localhost podman[64015]: 2025-11-23 08:08:01.341335757 +0000 UTC m=+0.084187431 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Nov 23 03:08:01 localhost podman[64015]: 2025-11-23 08:08:01.356913542 +0000 UTC m=+0.099765236 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step3)
Nov 23 03:08:01 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:08:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:08:07 localhost podman[64033]: 2025-11-23 08:08:07.168242959 +0000 UTC m=+0.076452275 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Nov 23 03:08:07 localhost podman[64033]: 2025-11-23 08:08:07.360029381 +0000 UTC m=+0.268238727 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, release=1761123044, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Nov 23 03:08:07 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:08:21 localhost sshd[64061]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:08:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:08:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:08:32 localhost systemd[1]: tmp-crun.F9kuQK.mount: Deactivated successfully.
Nov 23 03:08:32 localhost systemd[1]: tmp-crun.MFMeXo.mount: Deactivated successfully.
Nov 23 03:08:32 localhost podman[64063]: 2025-11-23 08:08:32.220413902 +0000 UTC m=+0.125995713 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, distribution-scope=public, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z)
Nov 23 03:08:32 localhost podman[64063]: 2025-11-23 08:08:32.228358184 +0000 UTC m=+0.133939975 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=)
Nov 23 03:08:32 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:08:32 localhost podman[64064]: 2025-11-23 08:08:32.191616786 +0000 UTC m=+0.095851561 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true)
Nov 23 03:08:32 localhost podman[64064]: 2025-11-23 08:08:32.273949836 +0000 UTC m=+0.178184561 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3)
Nov 23 03:08:32 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:08:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:08:38 localhost podman[64100]: 2025-11-23 08:08:38.177020354 +0000 UTC m=+0.084129039 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, release=1761123044, distribution-scope=public, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com)
Nov 23 03:08:38 localhost podman[64100]: 2025-11-23 08:08:38.413984519 +0000 UTC m=+0.321093194 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack 
Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true)
Nov 23 03:08:38 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:09:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:09:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:09:03 localhost podman[64208]: 2025-11-23 08:09:03.183576805 +0000 UTC m=+0.083446230 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid)
Nov 23 03:09:03 localhost podman[64208]: 2025-11-23 08:09:03.19878854 +0000 UTC m=+0.098657965 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-iscsid, 
version=17.1.12, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1)
Nov 23 03:09:03 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:09:03 localhost podman[64207]: 2025-11-23 08:09:03.285676261 +0000 UTC m=+0.185844844 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
release=1761123044, vcs-type=git, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:09:03 localhost podman[64207]: 2025-11-23 08:09:03.323920059 +0000 UTC m=+0.224088642 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:09:03 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:09:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:09:09 localhost systemd[1]: tmp-crun.90zjPP.mount: Deactivated successfully.
Nov 23 03:09:09 localhost podman[64247]: 2025-11-23 08:09:09.179977135 +0000 UTC m=+0.085377032 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:09:09 localhost podman[64247]: 2025-11-23 08:09:09.399964058 +0000 UTC m=+0.305363955 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, 
distribution-scope=public, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible)
Nov 23 03:09:09 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:09:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:09:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:09:34 localhost podman[64277]: 2025-11-23 08:09:34.165740671 +0000 UTC m=+0.068480932 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:09:34 localhost systemd[1]: tmp-crun.ggI1iR.mount: Deactivated successfully.
Nov 23 03:09:34 localhost podman[64276]: 2025-11-23 08:09:34.222931303 +0000 UTC m=+0.128048207 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, vcs-type=git, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:51:28Z, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:09:34 localhost podman[64276]: 2025-11-23 08:09:34.230831113 +0000 UTC m=+0.135947997 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:09:34 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:09:34 localhost podman[64277]: 2025-11-23 08:09:34.25062185 +0000 UTC m=+0.153362001 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, 
tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vcs-type=git)
Nov 23 03:09:34 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:09:40 localhost podman[64314]: 2025-11-23 08:09:40.187419193 +0000 UTC m=+0.091118645 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Nov 23 03:09:40 localhost podman[64314]: 2025-11-23 08:09:40.418946042 +0000 UTC m=+0.322645484 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true)
Nov 23 03:09:40 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:09:42 localhost sshd[64343]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:10:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:10:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:10:05 localhost podman[64421]: 2025-11-23 08:10:05.176507948 +0000 UTC m=+0.081171010 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, release=1761123044, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, container_name=collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Nov 23 03:10:05 localhost podman[64421]: 2025-11-23 08:10:05.19198588 +0000 UTC m=+0.096648952 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.expose-services=)
Nov 23 03:10:05 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:10:05 localhost podman[64422]: 2025-11-23 08:10:05.274417923 +0000 UTC m=+0.175474460 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:10:05 localhost podman[64422]: 2025-11-23 08:10:05.285831647 +0000 UTC m=+0.186888244 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Nov 23 03:10:05 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:10:11 localhost podman[64457]: 2025-11-23 08:10:11.179452852 +0000 UTC m=+0.089355718 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, architecture=x86_64, container_name=metrics_qdr, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Nov 23 03:10:11 localhost podman[64457]: 2025-11-23 08:10:11.403909003 +0000 UTC m=+0.313811829 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
distribution-scope=public, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Nov 23 03:10:11 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:10:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:10:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:10:36 localhost systemd[1]: tmp-crun.e9yPaR.mount: Deactivated successfully.
Nov 23 03:10:36 localhost podman[64487]: 2025-11-23 08:10:36.188271391 +0000 UTC m=+0.089935923 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, container_name=iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-type=git)
Nov 23 03:10:36 localhost podman[64487]: 2025-11-23 08:10:36.227954537 +0000 UTC m=+0.129619109 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, tcib_managed=true)
Nov 23 03:10:36 localhost podman[64486]: 2025-11-23 08:10:36.234887101 +0000 UTC m=+0.138963218 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, 
architecture=x86_64, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.buildah.version=1.41.4)
Nov 23 03:10:36 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:10:36 localhost podman[64486]: 2025-11-23 08:10:36.270084377 +0000 UTC m=+0.174160484 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true)
Nov 23 03:10:36 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:10:42 localhost podman[64527]: 2025-11-23 08:10:42.173585266 +0000 UTC m=+0.082301500 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, container_name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com)
Nov 23 03:10:42 localhost podman[64527]: 2025-11-23 08:10:42.354408736 +0000 UTC m=+0.263124900 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:10:42 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:11:00 localhost sshd[64684]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:11:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:11:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:11:07 localhost podman[64686]: 2025-11-23 08:11:07.18369395 +0000 UTC m=+0.091813984 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044)
Nov 23 03:11:07 localhost podman[64687]: 2025-11-23 08:11:07.221979017 +0000 UTC m=+0.126204978 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, tcib_managed=true, version=17.1.12)
Nov 23 03:11:07 localhost podman[64686]: 2025-11-23 08:11:07.243670375 +0000 UTC m=+0.151790409 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, container_name=collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:11:07 localhost podman[64687]: 2025-11-23 08:11:07.257963155 +0000 UTC m=+0.162189116 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., release=1761123044)
Nov 23 03:11:07 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:11:07 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:11:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:11:13 localhost podman[64726]: 2025-11-23 08:11:13.186538721 +0000 UTC m=+0.084531240 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public)
Nov 23 03:11:13 localhost podman[64726]: 2025-11-23 08:11:13.369980772 +0000 UTC m=+0.267973381 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
config_id=tripleo_step1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:11:13 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:11:24 localhost python3[64802]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:11:25 localhost python3[64847]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885484.2799594-107581-62278253988302/source _original_basename=tmpqvs8ljtf follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:11:26 localhost python3[64909]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:11:26 localhost python3[64952]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885485.7811832-107677-276303955953569/source _original_basename=tmpsio9k44c follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:11:27 localhost python3[65014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:11:27 localhost python3[65057]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885486.7048225-107772-225516050623661/source _original_basename=tmpc0ezjmvc follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:11:27 localhost python3[65119]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:11:28 localhost python3[65162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885487.632247-107834-124286746412201/source _original_basename=tmp0s1e9tf1 follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:11:28 localhost python3[65192]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 03:11:28 localhost systemd[1]: Reloading.
Nov 23 03:11:28 localhost systemd-rc-local-generator[65214]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:28 localhost systemd-sysv-generator[65218]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:29 localhost systemd[1]: Reloading.
Nov 23 03:11:29 localhost systemd-rc-local-generator[65257]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:29 localhost systemd-sysv-generator[65261]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:29 localhost python3[65282]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:11:29 localhost systemd[1]: Reloading.
Nov 23 03:11:30 localhost systemd-rc-local-generator[65304]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:30 localhost systemd-sysv-generator[65309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:30 localhost systemd[1]: Reloading.
Nov 23 03:11:30 localhost systemd-rc-local-generator[65343]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:30 localhost systemd-sysv-generator[65346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:30 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m.
Nov 23 03:11:30 localhost python3[65373]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 03:11:30 localhost systemd[1]: Reloading.
Nov 23 03:11:30 localhost systemd-sysv-generator[65403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:30 localhost systemd-rc-local-generator[65398]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:31 localhost python3[65457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:11:32 localhost python3[65500]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885491.3249404-107958-92038907401392/source _original_basename=tmp4g6fsjwd follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:11:32 localhost python3[65530]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:11:32 localhost systemd[1]: Reloading.
Nov 23 03:11:32 localhost systemd-rc-local-generator[65554]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:32 localhost systemd-sysv-generator[65558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:32 localhost systemd[1]: Reached target tripleo_nova_libvirt.target.
Nov 23 03:11:33 localhost python3[65585]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:11:34 localhost ansible-async_wrapper.py[65757]: Invoked with 622979106185 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885494.220328-108064-172428258665809/AnsiballZ_command.py _
Nov 23 03:11:34 localhost ansible-async_wrapper.py[65760]: Starting module and watcher
Nov 23 03:11:34 localhost ansible-async_wrapper.py[65760]: Start watching 65761 (3600)
Nov 23 03:11:34 localhost ansible-async_wrapper.py[65761]: Start module (65761)
Nov 23 03:11:34 localhost ansible-async_wrapper.py[65757]: Return async_wrapper task started.
Nov 23 03:11:35 localhost python3[65781]: ansible-ansible.legacy.async_status Invoked with jid=622979106185.65757 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:11:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:11:37 localhost podman[65832]: 2025-11-23 08:11:37.382361073 +0000 UTC m=+0.092519272 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-collectd, container_name=collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z)
Nov 23 03:11:37 localhost podman[65832]: 2025-11-23 08:11:37.39988546 +0000 UTC m=+0.110043619 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1)
Nov 23 03:11:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:11:37 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:11:37 localhost podman[65856]: 2025-11-23 08:11:37.514558301 +0000 UTC m=+0.080580015 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
release=1761123044, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64)
Nov 23 03:11:37 localhost podman[65856]: 2025-11-23 08:11:37.531241635 +0000 UTC m=+0.097263319 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:11:37 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 03:11:38 localhost puppet-user[65780]:   (file: /etc/puppet/hiera.yaml)
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: Undefined variable '::deploy_config_name';
Nov 23 03:11:38 localhost puppet-user[65780]:   (file & line not available)
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 03:11:38 localhost puppet-user[65780]:   (file & line not available)
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:11:38 localhost puppet-user[65780]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:11:38 localhost puppet-user[65780]:                    with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:11:38 localhost puppet-user[65780]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:11:38 localhost puppet-user[65780]:                    with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:11:38 localhost puppet-user[65780]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:11:38 localhost puppet-user[65780]:                    with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:11:38 localhost puppet-user[65780]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:11:38 localhost puppet-user[65780]:                    with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:11:38 localhost puppet-user[65780]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:11:38 localhost puppet-user[65780]:                    with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:11:38 localhost puppet-user[65780]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:11:38 localhost puppet-user[65780]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Nov 23 03:11:38 localhost puppet-user[65780]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.19 seconds
Nov 23 03:11:39 localhost ansible-async_wrapper.py[65760]: 65761 still running (3600)
Nov 23 03:11:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:11:44 localhost podman[65975]: 2025-11-23 08:11:44.168924016 +0000 UTC m=+0.075657585 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, tcib_managed=true)
Nov 23 03:11:44 localhost podman[65975]: 2025-11-23 08:11:44.350946634 +0000 UTC m=+0.257680213 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:11:44 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:11:44 localhost ansible-async_wrapper.py[65760]: 65761 still running (3595)
Nov 23 03:11:45 localhost python3[66050]: ansible-ansible.legacy.async_status Invoked with jid=622979106185.65757 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:11:46 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 03:11:46 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 03:11:46 localhost systemd[1]: Reloading.
Nov 23 03:11:47 localhost systemd-sysv-generator[66148]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:47 localhost systemd-rc-local-generator[66143]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:47 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 03:11:47 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 03:11:47 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 03:11:47 localhost systemd[1]: man-db-cache-update.service: Consumed 1.130s CPU time.
Nov 23 03:11:47 localhost systemd[1]: run-r23356ff9f2374807a1331c551116f7fc.service: Deactivated successfully.
Nov 23 03:11:48 localhost puppet-user[65780]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Nov 23 03:11:48 localhost puppet-user[65780]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}c2ffe05f6caebe153641ad28d5f67c5d4f53f50516d0ab53bde6ec38066f15b4'
Nov 23 03:11:48 localhost puppet-user[65780]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Nov 23 03:11:48 localhost puppet-user[65780]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Nov 23 03:11:48 localhost puppet-user[65780]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Nov 23 03:11:48 localhost puppet-user[65780]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Nov 23 03:11:49 localhost ansible-async_wrapper.py[65760]: 65761 still running (3590)
Nov 23 03:11:53 localhost puppet-user[65780]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Nov 23 03:11:53 localhost systemd[1]: Reloading.
Nov 23 03:11:53 localhost systemd-rc-local-generator[67243]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:53 localhost systemd-sysv-generator[67246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:54 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Nov 23 03:11:54 localhost snmpd[67254]: Can't find directory of RPM packages
Nov 23 03:11:54 localhost snmpd[67254]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Nov 23 03:11:54 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Nov 23 03:11:54 localhost systemd[1]: Reloading.
Nov 23 03:11:54 localhost systemd-rc-local-generator[67282]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:54 localhost systemd-sysv-generator[67286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:54 localhost systemd[1]: Reloading.
Nov 23 03:11:54 localhost ansible-async_wrapper.py[65760]: 65761 still running (3585)
Nov 23 03:11:54 localhost systemd-rc-local-generator[67319]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:11:54 localhost systemd-sysv-generator[67322]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:11:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:11:54 localhost puppet-user[65780]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Nov 23 03:11:55 localhost puppet-user[65780]: Notice: Applied catalog in 16.48 seconds
Nov 23 03:11:55 localhost puppet-user[65780]: Application:
Nov 23 03:11:55 localhost puppet-user[65780]:   Initial environment: production
Nov 23 03:11:55 localhost puppet-user[65780]:   Converged environment: production
Nov 23 03:11:55 localhost puppet-user[65780]:         Run mode: user
Nov 23 03:11:55 localhost puppet-user[65780]: Changes:
Nov 23 03:11:55 localhost puppet-user[65780]:            Total: 8
Nov 23 03:11:55 localhost puppet-user[65780]: Events:
Nov 23 03:11:55 localhost puppet-user[65780]:          Success: 8
Nov 23 03:11:55 localhost puppet-user[65780]:            Total: 8
Nov 23 03:11:55 localhost puppet-user[65780]: Resources:
Nov 23 03:11:55 localhost puppet-user[65780]:        Restarted: 1
Nov 23 03:11:55 localhost puppet-user[65780]:          Changed: 8
Nov 23 03:11:55 localhost puppet-user[65780]:      Out of sync: 8
Nov 23 03:11:55 localhost puppet-user[65780]:            Total: 19
Nov 23 03:11:55 localhost puppet-user[65780]: Time:
Nov 23 03:11:55 localhost puppet-user[65780]:       Filebucket: 0.00
Nov 23 03:11:55 localhost puppet-user[65780]:         Schedule: 0.00
Nov 23 03:11:55 localhost puppet-user[65780]:           Augeas: 0.01
Nov 23 03:11:55 localhost puppet-user[65780]:             File: 0.08
Nov 23 03:11:55 localhost puppet-user[65780]:   Config retrieval: 0.25
Nov 23 03:11:55 localhost puppet-user[65780]:          Service: 1.30
Nov 23 03:11:55 localhost puppet-user[65780]:   Transaction evaluation: 16.47
Nov 23 03:11:55 localhost puppet-user[65780]:   Catalog application: 16.48
Nov 23 03:11:55 localhost puppet-user[65780]:         Last run: 1763885515
Nov 23 03:11:55 localhost puppet-user[65780]:             Exec: 5.06
Nov 23 03:11:55 localhost puppet-user[65780]:          Package: 9.85
Nov 23 03:11:55 localhost puppet-user[65780]:            Total: 16.49
Nov 23 03:11:55 localhost puppet-user[65780]: Version:
Nov 23 03:11:55 localhost puppet-user[65780]:           Config: 1763885498
Nov 23 03:11:55 localhost puppet-user[65780]:           Puppet: 7.10.0
Nov 23 03:11:55 localhost ansible-async_wrapper.py[65761]: Module complete (65761)
Nov 23 03:11:55 localhost python3[67358]: ansible-ansible.legacy.async_status Invoked with jid=622979106185.65757 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:11:56 localhost python3[67374]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 03:11:56 localhost python3[67390]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:11:57 localhost python3[67440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:11:57 localhost python3[67458]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp38bvdmpt recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 03:11:57 localhost python3[67488]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:11:59 localhost python3[67591]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Nov 23 03:11:59 localhost ansible-async_wrapper.py[65760]: Done in kid B.
Nov 23 03:12:00 localhost python3[67610]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:00 localhost python3[67642]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:12:01 localhost python3[67692]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:12:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:12:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5172 writes, 23K keys, 5172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5172 writes, 552 syncs, 9.37 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 64 writes, 96 keys, 64 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s#012Interval WAL: 64 writes, 32 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:12:01 localhost python3[67710]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:02 localhost python3[67772]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:12:02 localhost python3[67790]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:03 localhost python3[67852]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:12:03 localhost python3[67870]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:04 localhost python3[67932]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:12:04 localhost python3[67950]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:04 localhost python3[67980]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:12:04 localhost systemd[1]: Reloading.
Nov 23 03:12:04 localhost systemd-rc-local-generator[68009]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:04 localhost systemd-sysv-generator[68012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:05 localhost python3[68067]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:12:05 localhost python3[68085]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:12:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4448 writes, 20K keys, 4448 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4448 writes, 502 syncs, 8.86 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 18 writes, 36 keys, 18 commit groups, 1.0 writes per commit group, ingest: 0.01 MB, 0.00 MB/s#012Interval WAL: 18 writes, 9 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:12:06 localhost python3[68147]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:12:06 localhost python3[68165]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:07 localhost python3[68195]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:12:07 localhost systemd[1]: Reloading.
Nov 23 03:12:07 localhost systemd-rc-local-generator[68218]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:07 localhost systemd-sysv-generator[68223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:12:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:12:07 localhost systemd[1]: Starting Create netns directory...
Nov 23 03:12:07 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 03:12:07 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 03:12:07 localhost systemd[1]: Finished Create netns directory.
Nov 23 03:12:07 localhost systemd[1]: tmp-crun.SY3DP0.mount: Deactivated successfully.
Nov 23 03:12:07 localhost podman[68233]: 2025-11-23 08:12:07.767187259 +0000 UTC m=+0.105086014 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, container_name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:12:07 localhost podman[68233]: 2025-11-23 08:12:07.780224063 +0000 UTC m=+0.118122818 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1)
Nov 23 03:12:07 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:12:07 localhost podman[68234]: 2025-11-23 08:12:07.847587527 +0000 UTC m=+0.185107422 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, config_id=tripleo_step3, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, container_name=iscsid)
Nov 23 03:12:07 localhost podman[68234]: 2025-11-23 08:12:07.860013236 +0000 UTC m=+0.197533131 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, container_name=iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3)
Nov 23 03:12:07 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:12:08 localhost python3[68292]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Nov 23 03:12:10 localhost python3[68351]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Nov 23 03:12:10 localhost podman[68516]: 2025-11-23 08:12:10.472473168 +0000 UTC m=+0.061499958 container create 27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=configure_cms_options, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z)
Nov 23 03:12:10 localhost podman[68515]: 2025-11-23 08:12:10.497111571 +0000 UTC m=+0.089712146 container create e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:12:10 localhost podman[68518]: 2025-11-23 08:12:10.516910385 +0000 UTC m=+0.098969521 container create 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, container_name=logrotate_crond, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.component=openstack-cron-container, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac.scope.
Nov 23 03:12:10 localhost podman[68540]: 2025-11-23 08:12:10.523348386 +0000 UTC m=+0.096625839 container create 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, 
build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Nov 23 03:12:10 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.scope.
Nov 23 03:12:10 localhost podman[68516]: 2025-11-23 08:12:10.439627 +0000 UTC m=+0.028653790 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Nov 23 03:12:10 localhost podman[68515]: 2025-11-23 08:12:10.438035767 +0000 UTC m=+0.030636372 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Nov 23 03:12:10 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:10 localhost podman[68516]: 2025-11-23 08:12:10.545019409 +0000 UTC m=+0.134046209 container init 27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044)
Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb9ea471cf56dc90569406e830c1ff52ff8d53f4c5c1a577e872b53deec2a8c/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:10 localhost podman[68552]: 2025-11-23 08:12:10.549134628 +0000 UTC m=+0.107172247 container create 39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_libvirt_init_secret, version=17.1.12, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 23 03:12:10 localhost podman[68518]: 2025-11-23 08:12:10.460602964 +0000 UTC m=+0.042662090 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Nov 23 03:12:10 localhost podman[68540]: 2025-11-23 08:12:10.460956544 +0000 UTC m=+0.034234007 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.scope.
Nov 23 03:12:10 localhost podman[68552]: 2025-11-23 08:12:10.46722614 +0000 UTC m=+0.025263769 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 23 03:12:10 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:12:10 localhost podman[68515]: 2025-11-23 08:12:10.578441504 +0000 UTC m=+0.171042079 container init e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30b74a826698ec4802daa9c13e3cae799596770c17ebe45666d9bdf35d0eb90/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:12:10 localhost podman[68515]: 2025-11-23 08:12:10.601188276 +0000 UTC m=+0.193788851 container start e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, 
name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12)
Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:12:10 localhost podman[68540]: 2025-11-23 08:12:10.604945806 +0000 UTC m=+0.178223269 container init 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, release=1761123044)
Nov 23 03:12:10 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=da5facbcd2df03440dc3d35420cadd63 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.scope.
Nov 23 03:12:10 localhost systemd[1]: Started libpod-conmon-39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f.scope.
Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:12:10 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd19aa3bbf1d46933c6539044a339b8340627ee4c1548c2610024703ba9478a8/merged/var/log/containers supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:10 localhost podman[68540]: 2025-11-23 08:12:10.636834659 +0000 UTC m=+0.210112132 container start 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12)
Nov 23 03:12:10 localhost ovs-vsctl[68639]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options
Nov 23 03:12:10 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=da5facbcd2df03440dc3d35420cadd63 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Nov 23 03:12:10 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:10 localhost systemd[1]: libpod-27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac.scope: Deactivated successfully.
Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74697c8bdf6b204cc9bb228876ae3d75c64bcbd0730efaa9813a6a90647aeee/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74697c8bdf6b204cc9bb228876ae3d75c64bcbd0730efaa9813a6a90647aeee/merged/etc/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74697c8bdf6b204cc9bb228876ae3d75c64bcbd0730efaa9813a6a90647aeee/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:10 localhost podman[68516]: 2025-11-23 08:12:10.664238345 +0000 UTC m=+0.253265125 container start 27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64)
Nov 23 03:12:10 localhost podman[68516]: 2025-11-23 08:12:10.664400429 +0000 UTC m=+0.253427239 container attach 27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, container_name=configure_cms_options, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container)
Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:12:10 localhost podman[68518]: 2025-11-23 08:12:10.676647983 +0000 UTC m=+0.258707109 container init 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible)
Nov 23 03:12:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:12:10 localhost podman[68518]: 2025-11-23 08:12:10.701232534 +0000 UTC m=+0.283291660 container start 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public)
Nov 23 03:12:10 localhost podman[68630]: 2025-11-23 08:12:10.704170752 +0000 UTC m=+0.071295488 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, tcib_managed=true)
Nov 23 03:12:10 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Nov 23 03:12:10 localhost podman[68552]: 2025-11-23 08:12:10.717144606 +0000 UTC m=+0.275182245 container init 39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:12:10 localhost podman[68630]: 2025-11-23 08:12:10.718809529 +0000 UTC m=+0.085934295 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, 
container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container)
Nov 23 03:12:10 localhost podman[68630]: unhealthy
Nov 23 03:12:10 localhost podman[68516]: 2025-11-23 08:12:10.731739262 +0000 UTC m=+0.320766062 container died 27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, container_name=configure_cms_options, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, tcib_managed=true)
Nov 23 03:12:10 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:12:10 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed with result 'exit-code'.
Nov 23 03:12:10 localhost podman[68646]: 2025-11-23 08:12:10.77737799 +0000 UTC m=+0.121705742 container cleanup 27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=configure_cms_options, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, url=https://www.redhat.com)
Nov 23 03:12:10 localhost systemd[1]: libpod-conmon-27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac.scope: Deactivated successfully.
Nov 23 03:12:10 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro 
--volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi
Nov 23 03:12:10 localhost systemd[1]: libpod-39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f.scope: Deactivated successfully.
Nov 23 03:12:10 localhost podman[68552]: 2025-11-23 08:12:10.827591259 +0000 UTC m=+0.385628888 container start 39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_libvirt_init_secret, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:12:10 localhost podman[68686]: 2025-11-23 08:12:10.829683635 +0000 UTC m=+0.123554582 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:12:10 localhost podman[68552]: 2025-11-23 08:12:10.828519514 +0000 UTC m=+0.386557133 container attach 39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_libvirt_init_secret, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', 
'/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible)
Nov 23 03:12:10 localhost podman[68686]: 2025-11-23 08:12:10.864774294 +0000 UTC m=+0.158645261 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, container_name=logrotate_crond, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Nov 23 03:12:10 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:12:10 localhost podman[68552]: 2025-11-23 08:12:10.882499263 +0000 UTC m=+0.440536882 container died 39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_libvirt_init_secret, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.buildah.version=1.41.4)
Nov 23 03:12:10 localhost podman[68751]: 2025-11-23 08:12:10.982680495 +0000 UTC m=+0.156653918 container cleanup 39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:12:10 localhost systemd[1]: libpod-conmon-39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f.scope: Deactivated successfully.
Nov 23 03:12:11 localhost podman[68606]: 2025-11-23 08:12:11.013377587 +0000 UTC m=+0.408423702 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Nov 23 03:12:11 localhost podman[68606]: 2025-11-23 08:12:11.023025212 +0000 UTC m=+0.418071337 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:12:11 localhost podman[68606]: unhealthy
Nov 23 03:12:11 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:12:11 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed with result 'exit-code'.
Nov 23 03:12:11 localhost podman[68870]: 2025-11-23 08:12:11.134121294 +0000 UTC m=+0.066066311 container create a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, container_name=setup_ovs_manager, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, architecture=x86_64, vcs-type=git)
Nov 23 03:12:11 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack
Nov 23 03:12:11 localhost systemd[1]: Started libpod-conmon-a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64.scope.
Nov 23 03:12:11 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:11 localhost podman[68870]: 2025-11-23 08:12:11.104478008 +0000 UTC m=+0.036423055 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Nov 23 03:12:11 localhost podman[68870]: 2025-11-23 08:12:11.205170974 +0000 UTC m=+0.137116011 container init a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=setup_ovs_manager, config_id=tripleo_step4)
Nov 23 03:12:11 localhost podman[68870]: 2025-11-23 08:12:11.213028452 +0000 UTC m=+0.144973489 container start a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, tcib_managed=true)
Nov 23 03:12:11 localhost podman[68870]: 2025-11-23 08:12:11.213222867 +0000 UTC m=+0.145167894 container attach a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=setup_ovs_manager, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, vcs-type=git)
Nov 23 03:12:11 localhost podman[68885]: 2025-11-23 08:12:11.153279691 +0000 UTC m=+0.046618946 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 03:12:11 localhost podman[68885]: 2025-11-23 08:12:11.256376989 +0000 UTC m=+0.149716244 container create 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=nova_migration_target, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:12:11 localhost systemd[1]: Started libpod-conmon-7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.scope.
Nov 23 03:12:11 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89b45405a25ad5b4e2d46e88df5b29e5f9747842d208330f6b1b95a66e4c65e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:12:11 localhost podman[68885]: 2025-11-23 08:12:11.336617043 +0000 UTC m=+0.229956358 container init 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:12:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:12:11 localhost podman[68885]: 2025-11-23 08:12:11.373972882 +0000 UTC m=+0.267312147 container start 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z)
Nov 23 03:12:11 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 03:12:11 localhost podman[68937]: 2025-11-23 08:12:11.458371236 +0000 UTC m=+0.074372749 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, version=17.1.12, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:12:11 localhost systemd[1]: var-lib-containers-storage-overlay-ae91eec9c362b2c490fa8a14d0f5059208afabdd28cf60783bdf8d722c1b54ce-merged.mount: Deactivated successfully.
Nov 23 03:12:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27f11588432d3e12795d9de0b99e62b5f95920fd5e17bc80549e79f5b6b99fac-userdata-shm.mount: Deactivated successfully.
Nov 23 03:12:11 localhost podman[68937]: 2025-11-23 08:12:11.780911125 +0000 UTC m=+0.396912648 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target)
Nov 23 03:12:11 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:12:12 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 23 03:12:13 localhost ovs-vsctl[69114]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 23 03:12:14 localhost systemd[1]: libpod-a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64.scope: Deactivated successfully.
Nov 23 03:12:14 localhost systemd[1]: libpod-a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64.scope: Consumed 2.910s CPU time.
Nov 23 03:12:14 localhost podman[69115]: 2025-11-23 08:12:14.215146541 +0000 UTC m=+0.052350047 container died a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, url=https://www.redhat.com, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=setup_ovs_manager, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:14:25Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:12:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64-userdata-shm.mount: Deactivated successfully.
Nov 23 03:12:14 localhost systemd[1]: var-lib-containers-storage-overlay-b77c99c8f5ad929cf9fda4baf1f02e4b486405893a4fe6affbd8bef9d65bdac7-merged.mount: Deactivated successfully.
Nov 23 03:12:14 localhost podman[69115]: 2025-11-23 08:12:14.258822426 +0000 UTC m=+0.096025892 container cleanup a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, container_name=setup_ovs_manager, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:12:14 localhost systemd[1]: libpod-conmon-a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64.scope: Deactivated successfully.
Nov 23 03:12:14 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1763883761 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1763883761'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata
Nov 23 03:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:12:14 localhost systemd[1]: tmp-crun.FaDS4B.mount: Deactivated successfully.
Nov 23 03:12:14 localhost podman[69159]: 2025-11-23 08:12:14.564472407 +0000 UTC m=+0.086540152 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team)
Nov 23 03:12:14 localhost podman[69159]: 2025-11-23 08:12:14.730888543 +0000 UTC m=+0.252956298 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.expose-services=, batch=17.1_20251118.1)
Nov 23 03:12:14 localhost podman[69249]: 2025-11-23 08:12:14.743441694 +0000 UTC m=+0.086389018 container create 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, batch=17.1_20251118.1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Nov 23 03:12:14 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:12:14 localhost podman[69250]: 2025-11-23 08:12:14.768748254 +0000 UTC m=+0.107864486 container create 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible)
Nov 23 03:12:14 localhost systemd[1]: Started libpod-conmon-838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.scope.
Nov 23 03:12:14 localhost podman[69249]: 2025-11-23 08:12:14.694756696 +0000 UTC m=+0.037704060 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Nov 23 03:12:14 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:14 localhost podman[69250]: 2025-11-23 08:12:14.703863257 +0000 UTC m=+0.042979489 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Nov 23 03:12:14 localhost systemd[1]: Started libpod-conmon-21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.scope.
Nov 23 03:12:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6c9a2cd921c771320ccd8c605179d677283c241110a1e27dd4733dcdcf4da2/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6c9a2cd921c771320ccd8c605179d677283c241110a1e27dd4733dcdcf4da2/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6c9a2cd921c771320ccd8c605179d677283c241110a1e27dd4733dcdcf4da2/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:14 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71ea05df59e3d7e6164ad14b9e6ef7192ae917ae3414df8fb50b3972aecb677/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71ea05df59e3d7e6164ad14b9e6ef7192ae917ae3414df8fb50b3972aecb677/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71ea05df59e3d7e6164ad14b9e6ef7192ae917ae3414df8fb50b3972aecb677/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:12:14 localhost podman[69249]: 2025-11-23 08:12:14.839871958 +0000 UTC m=+0.182819322 container init 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 23 03:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:12:14 localhost podman[69250]: 2025-11-23 08:12:14.8569526 +0000 UTC m=+0.196068832 container init 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent)
Nov 23 03:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:12:14 localhost podman[69249]: 2025-11-23 08:12:14.880939714 +0000 UTC m=+0.223887028 container start 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Nov 23 03:12:14 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:12:14 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Nov 23 03:12:14 localhost systemd[1]: Created slice User Slice of UID 0.
Nov 23 03:12:14 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 23 03:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:12:14 localhost podman[69250]: 2025-11-23 08:12:14.90799054 +0000 UTC m=+0.247106782 container start 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Nov 23 03:12:14 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 23 03:12:14 localhost systemd[1]: Starting User Manager for UID 0...
Nov 23 03:12:14 localhost python3[68351]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=8ff67c95922a0236a1e9ce0694abb49c --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Nov 23 03:12:14 localhost podman[69293]: 2025-11-23 08:12:14.98163452 +0000 UTC m=+0.093345612 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_controller, konflux.additional-tags=17.1.12 
17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public)
Nov 23 03:12:14 localhost podman[69302]: 2025-11-23 08:12:14.994125511 +0000 UTC m=+0.080425261 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=)
Nov 23 03:12:15 localhost podman[69293]: 2025-11-23 08:12:15.018775272 +0000 UTC m=+0.130486324 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, distribution-scope=public, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:12:15 localhost podman[69302]: 2025-11-23 08:12:15.03188571 +0000 UTC m=+0.118185470 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=)
Nov 23 03:12:15 localhost podman[69302]: unhealthy
Nov 23 03:12:15 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:12:15 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:12:15 localhost systemd[69314]: Queued start job for default target Main User Target.
Nov 23 03:12:15 localhost systemd[69314]: Created slice User Application Slice.
Nov 23 03:12:15 localhost systemd[69314]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 23 03:12:15 localhost systemd[69314]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 03:12:15 localhost systemd[69314]: Reached target Paths.
Nov 23 03:12:15 localhost systemd[69314]: Reached target Timers.
Nov 23 03:12:15 localhost systemd[69314]: Starting D-Bus User Message Bus Socket...
Nov 23 03:12:15 localhost systemd[69314]: Starting Create User's Volatile Files and Directories...
Nov 23 03:12:15 localhost podman[69293]: unhealthy
Nov 23 03:12:15 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:12:15 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:12:15 localhost systemd[69314]: Listening on D-Bus User Message Bus Socket.
Nov 23 03:12:15 localhost systemd[69314]: Finished Create User's Volatile Files and Directories.
Nov 23 03:12:15 localhost systemd[69314]: Reached target Sockets.
Nov 23 03:12:15 localhost systemd[69314]: Reached target Basic System.
Nov 23 03:12:15 localhost systemd[69314]: Reached target Main User Target.
Nov 23 03:12:15 localhost systemd[69314]: Startup finished in 125ms.
Nov 23 03:12:15 localhost systemd[1]: Started User Manager for UID 0.
Nov 23 03:12:15 localhost systemd[1]: Started Session c9 of User root.
Nov 23 03:12:15 localhost systemd[1]: session-c9.scope: Deactivated successfully.
Nov 23 03:12:15 localhost kernel: device br-int entered promiscuous mode
Nov 23 03:12:15 localhost NetworkManager[5990]: <info>  [1763885535.2511] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11)
Nov 23 03:12:15 localhost systemd-udevd[69402]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 03:12:15 localhost kernel: device genev_sys_6081 entered promiscuous mode
Nov 23 03:12:15 localhost systemd-udevd[69405]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 03:12:15 localhost NetworkManager[5990]: <info>  [1763885535.2924] device (genev_sys_6081): carrier: link connected
Nov 23 03:12:15 localhost NetworkManager[5990]: <info>  [1763885535.2927] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12)
Nov 23 03:12:15 localhost python3[69427]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:15 localhost python3[69443]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:16 localhost python3[69459]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:17 localhost python3[69478]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:17 localhost sshd[69481]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:12:17 localhost python3[69497]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:17 localhost python3[69514]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:18 localhost python3[69531]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:12:18 localhost python3[69549]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:12:18 localhost python3[69565]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:12:19 localhost python3[69581]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:12:19 localhost python3[69597]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:12:19 localhost python3[69613]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:12:20 localhost python3[69674]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.6707296-109611-184965642808085/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:20 localhost python3[69703]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.6707296-109611-184965642808085/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:21 localhost python3[69732]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.6707296-109611-184965642808085/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:21 localhost python3[69761]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.6707296-109611-184965642808085/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:22 localhost python3[69790]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.6707296-109611-184965642808085/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:22 localhost python3[69819]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885539.6707296-109611-184965642808085/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:23 localhost python3[69835]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 03:12:23 localhost systemd[1]: Reloading.
Nov 23 03:12:23 localhost systemd-sysv-generator[69861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:23 localhost systemd-rc-local-generator[69857]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:24 localhost python3[69887]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:12:24 localhost systemd[1]: Reloading.
Nov 23 03:12:24 localhost systemd-sysv-generator[69917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:24 localhost systemd-rc-local-generator[69912]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:24 localhost systemd[1]: Starting ceilometer_agent_compute container...
Nov 23 03:12:24 localhost tripleo-start-podman-container[69927]: Creating additional drop-in dependency for "ceilometer_agent_compute" (e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931)
Nov 23 03:12:24 localhost systemd[1]: Reloading.
Nov 23 03:12:25 localhost systemd-sysv-generator[69986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:25 localhost systemd-rc-local-generator[69981]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:25 localhost systemd[1]: Started ceilometer_agent_compute container.
Nov 23 03:12:25 localhost systemd[1]: Stopping User Manager for UID 0...
Nov 23 03:12:25 localhost systemd[69314]: Activating special unit Exit the Session...
Nov 23 03:12:25 localhost systemd[69314]: Stopped target Main User Target.
Nov 23 03:12:25 localhost systemd[69314]: Stopped target Basic System.
Nov 23 03:12:25 localhost systemd[69314]: Stopped target Paths.
Nov 23 03:12:25 localhost systemd[69314]: Stopped target Sockets.
Nov 23 03:12:25 localhost systemd[69314]: Stopped target Timers.
Nov 23 03:12:25 localhost systemd[69314]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 03:12:25 localhost systemd[69314]: Closed D-Bus User Message Bus Socket.
Nov 23 03:12:25 localhost systemd[69314]: Stopped Create User's Volatile Files and Directories.
Nov 23 03:12:25 localhost systemd[69314]: Removed slice User Application Slice.
Nov 23 03:12:25 localhost systemd[69314]: Reached target Shutdown.
Nov 23 03:12:25 localhost systemd[69314]: Finished Exit the Session.
Nov 23 03:12:25 localhost systemd[69314]: Reached target Exit the Session.
Nov 23 03:12:25 localhost systemd[1]: user@0.service: Deactivated successfully.
Nov 23 03:12:25 localhost systemd[1]: Stopped User Manager for UID 0.
Nov 23 03:12:25 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 23 03:12:25 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 23 03:12:25 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 23 03:12:25 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 23 03:12:25 localhost systemd[1]: Removed slice User Slice of UID 0.
Nov 23 03:12:25 localhost python3[70014]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:12:25 localhost systemd[1]: Reloading.
Nov 23 03:12:26 localhost systemd-rc-local-generator[70043]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:26 localhost systemd-sysv-generator[70048]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:26 localhost systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 23 03:12:26 localhost systemd[1]: Started ceilometer_agent_ipmi container.
Nov 23 03:12:27 localhost python3[70081]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:12:27 localhost systemd[1]: Reloading.
Nov 23 03:12:27 localhost systemd-sysv-generator[70113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:27 localhost systemd-rc-local-generator[70110]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:27 localhost systemd[1]: Starting logrotate_crond container...
Nov 23 03:12:27 localhost systemd[1]: Started logrotate_crond container.
Nov 23 03:12:28 localhost python3[70148]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:12:28 localhost systemd[1]: Reloading.
Nov 23 03:12:28 localhost systemd-rc-local-generator[70171]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:28 localhost systemd-sysv-generator[70177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:28 localhost systemd[1]: Starting nova_migration_target container...
Nov 23 03:12:28 localhost systemd[1]: Started nova_migration_target container.
Nov 23 03:12:29 localhost python3[70215]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:12:29 localhost systemd[1]: Reloading.
Nov 23 03:12:29 localhost systemd-rc-local-generator[70245]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:29 localhost systemd-sysv-generator[70248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:29 localhost systemd[1]: Starting ovn_controller container...
Nov 23 03:12:29 localhost tripleo-start-podman-container[70255]: Creating additional drop-in dependency for "ovn_controller" (838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd)
Nov 23 03:12:29 localhost systemd[1]: Reloading.
Nov 23 03:12:30 localhost systemd-rc-local-generator[70311]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:30 localhost systemd-sysv-generator[70316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:30 localhost systemd[1]: Started ovn_controller container.
Nov 23 03:12:30 localhost python3[70338]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:12:31 localhost systemd[1]: Reloading.
Nov 23 03:12:31 localhost systemd-rc-local-generator[70365]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:12:31 localhost systemd-sysv-generator[70368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:12:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:12:31 localhost systemd[1]: Starting ovn_metadata_agent container...
Nov 23 03:12:31 localhost systemd[1]: Started ovn_metadata_agent container.
Nov 23 03:12:31 localhost python3[70418]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:33 localhost python3[70540]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005532586 step=4 update_config_hash_only=False
Nov 23 03:12:34 localhost python3[70556]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:12:34 localhost python3[70572]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True
Nov 23 03:12:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:12:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:12:38 localhost systemd[1]: tmp-crun.BFyQmE.mount: Deactivated successfully.
Nov 23 03:12:38 localhost podman[70574]: 2025-11-23 08:12:38.172320352 +0000 UTC m=+0.077257375 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, 
build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public)
Nov 23 03:12:38 localhost podman[70574]: 2025-11-23 08:12:38.183500368 +0000 UTC m=+0.088437321 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T23:44:13Z, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:12:38 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:12:38 localhost podman[70573]: 2025-11-23 08:12:38.236211574 +0000 UTC m=+0.139353560 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com)
Nov 23 03:12:38 localhost podman[70573]: 2025-11-23 08:12:38.254150809 +0000 UTC m=+0.157292825 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4)
Nov 23 03:12:38 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:12:41 localhost podman[70613]: 2025-11-23 08:12:41.183060999 +0000 UTC m=+0.082610238 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, 
url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, container_name=ceilometer_agent_compute, release=1761123044)
Nov 23 03:12:41 localhost systemd[1]: tmp-crun.igrJay.mount: Deactivated successfully.
Nov 23 03:12:41 localhost podman[70611]: 2025-11-23 08:12:41.233582656 +0000 UTC m=+0.136952156 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-cron-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:12:41 localhost podman[70611]: 2025-11-23 08:12:41.244785503 +0000 UTC m=+0.148155003 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, container_name=logrotate_crond, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 
17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-cron-container)
Nov 23 03:12:41 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:12:41 localhost podman[70612]: 2025-11-23 08:12:41.291338815 +0000 UTC m=+0.192134787 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, version=17.1.12, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4)
Nov 23 03:12:41 localhost podman[70613]: 2025-11-23 08:12:41.310158504 +0000 UTC m=+0.209707733 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:12:41 localhost podman[70612]: 2025-11-23 08:12:41.320888257 +0000 UTC m=+0.221684239 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, vcs-type=git, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com)
Nov 23 03:12:41 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:12:41 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:12:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:12:42 localhost podman[70683]: 2025-11-23 08:12:42.166246894 +0000 UTC m=+0.075662203 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:12:42 localhost podman[70683]: 2025-11-23 08:12:42.604005872 +0000 UTC m=+0.513421211 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, version=17.1.12, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:12:42 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:12:45 localhost podman[70708]: 2025-11-23 08:12:45.177224797 +0000 UTC m=+0.079387992 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:12:45 localhost podman[70707]: 2025-11-23 08:12:45.152656087 +0000 UTC m=+0.061818537 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1)
Nov 23 03:12:45 localhost systemd[1]: tmp-crun.GR7F7l.mount: Deactivated successfully.
Nov 23 03:12:45 localhost podman[70706]: 2025-11-23 08:12:45.223095511 +0000 UTC m=+0.133219717 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044)
Nov 23 03:12:45 localhost podman[70708]: 2025-11-23 08:12:45.248776111 +0000 UTC m=+0.150939406 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:12:45 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:12:45 localhost podman[70706]: 2025-11-23 08:12:45.260748298 +0000 UTC m=+0.170872534 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-type=git)
Nov 23 03:12:45 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:12:45 localhost podman[70707]: 2025-11-23 08:12:45.337854079 +0000 UTC m=+0.247016559 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z)
Nov 23 03:12:45 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:12:54 localhost snmpd[67254]: empty variable list in _query
Nov 23 03:12:54 localhost snmpd[67254]: empty variable list in _query
Nov 23 03:12:57 localhost podman[70966]: 
Nov 23 03:12:57 localhost podman[70966]: 2025-11-23 08:12:57.841467568 +0000 UTC m=+0.080855821 container create 0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_franklin, version=7, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-type=git, release=553, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 03:12:57 localhost systemd[1]: Started libpod-conmon-0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565.scope.
Nov 23 03:12:57 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:57 localhost podman[70966]: 2025-11-23 08:12:57.808086655 +0000 UTC m=+0.047474938 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 03:12:57 localhost podman[70966]: 2025-11-23 08:12:57.923488969 +0000 UTC m=+0.162877222 container init 0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_franklin, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.openshift.expose-services=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 03:12:57 localhost systemd[1]: tmp-crun.JtcOkb.mount: Deactivated successfully.
Nov 23 03:12:57 localhost podman[70966]: 2025-11-23 08:12:57.94240323 +0000 UTC m=+0.181791493 container start 0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_franklin, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, release=553, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=)
Nov 23 03:12:57 localhost podman[70966]: 2025-11-23 08:12:57.942851892 +0000 UTC m=+0.182240185 container attach 0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_franklin, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, vcs-type=git)
Nov 23 03:12:57 localhost vigorous_franklin[70981]: 167 167
Nov 23 03:12:57 localhost systemd[1]: libpod-0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565.scope: Deactivated successfully.
Nov 23 03:12:57 localhost podman[70966]: 2025-11-23 08:12:57.947952167 +0000 UTC m=+0.187340440 container died 0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_franklin, release=553, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git)
Nov 23 03:12:58 localhost podman[70986]: 2025-11-23 08:12:58.054057316 +0000 UTC m=+0.090971340 container remove 0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_franklin, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, ceph=True, vendor=Red Hat, Inc., name=rhceph, release=553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True)
Nov 23 03:12:58 localhost systemd[1]: libpod-conmon-0e22ab0bf8d4c609dc3899cc22fc64ba377cba6d2cfb20de468d1b3de3131565.scope: Deactivated successfully.
Nov 23 03:12:58 localhost podman[71008]: 
Nov 23 03:12:58 localhost podman[71008]: 2025-11-23 08:12:58.278139598 +0000 UTC m=+0.081084238 container create 79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_archimedes, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=)
Nov 23 03:12:58 localhost systemd[1]: Started libpod-conmon-79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49.scope.
Nov 23 03:12:58 localhost systemd[1]: Started libcrun container.
Nov 23 03:12:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3657f367eb6812069a6459545de3b20cdcab06a1a8b9bb78300edfc6ef135582/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3657f367eb6812069a6459545de3b20cdcab06a1a8b9bb78300edfc6ef135582/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3657f367eb6812069a6459545de3b20cdcab06a1a8b9bb78300edfc6ef135582/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 03:12:58 localhost podman[71008]: 2025-11-23 08:12:58.341288609 +0000 UTC m=+0.144233219 container init 79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_archimedes, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, ceph=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, release=553)
Nov 23 03:12:58 localhost podman[71008]: 2025-11-23 08:12:58.246206572 +0000 UTC m=+0.049151232 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 03:12:58 localhost podman[71008]: 2025-11-23 08:12:58.350239535 +0000 UTC m=+0.153184175 container start 79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_archimedes, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=)
Nov 23 03:12:58 localhost podman[71008]: 2025-11-23 08:12:58.350601575 +0000 UTC m=+0.153546245 container attach 79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_archimedes, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph)
Nov 23 03:12:58 localhost systemd[1]: var-lib-containers-storage-overlay-201e7fff577671eed3c9575df75263099f62e294ad731501ba06544d7d115b2e-merged.mount: Deactivated successfully.
Nov 23 03:12:59 localhost boring_archimedes[71024]: [
Nov 23 03:12:59 localhost boring_archimedes[71024]:    {
Nov 23 03:12:59 localhost boring_archimedes[71024]:        "available": false,
Nov 23 03:12:59 localhost boring_archimedes[71024]:        "ceph_device": false,
Nov 23 03:12:59 localhost boring_archimedes[71024]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 03:12:59 localhost boring_archimedes[71024]:        "lsm_data": {},
Nov 23 03:12:59 localhost boring_archimedes[71024]:        "lvs": [],
Nov 23 03:12:59 localhost boring_archimedes[71024]:        "path": "/dev/sr0",
Nov 23 03:12:59 localhost boring_archimedes[71024]:        "rejected_reasons": [
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "Has a FileSystem",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "Insufficient space (<5GB)"
Nov 23 03:12:59 localhost boring_archimedes[71024]:        ],
Nov 23 03:12:59 localhost boring_archimedes[71024]:        "sys_api": {
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "actuators": null,
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "device_nodes": "sr0",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "human_readable_size": "482.00 KB",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "id_bus": "ata",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "model": "QEMU DVD-ROM",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "nr_requests": "2",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "partitions": {},
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "path": "/dev/sr0",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "removable": "1",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "rev": "2.5+",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "ro": "0",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "rotational": "1",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "sas_address": "",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "sas_device_handle": "",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "scheduler_mode": "mq-deadline",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "sectors": 0,
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "sectorsize": "2048",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "size": 493568.0,
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "support_discard": "0",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "type": "disk",
Nov 23 03:12:59 localhost boring_archimedes[71024]:            "vendor": "QEMU"
Nov 23 03:12:59 localhost boring_archimedes[71024]:        }
Nov 23 03:12:59 localhost boring_archimedes[71024]:    }
Nov 23 03:12:59 localhost boring_archimedes[71024]: ]
Nov 23 03:12:59 localhost systemd[1]: libpod-79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49.scope: Deactivated successfully.
Nov 23 03:12:59 localhost podman[71008]: 2025-11-23 08:12:59.29939769 +0000 UTC m=+1.102342290 container died 79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_archimedes, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=553, io.buildah.version=1.33.12, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main)
Nov 23 03:12:59 localhost systemd[1]: tmp-crun.dKQ1kt.mount: Deactivated successfully.
Nov 23 03:12:59 localhost systemd[1]: var-lib-containers-storage-overlay-3657f367eb6812069a6459545de3b20cdcab06a1a8b9bb78300edfc6ef135582-merged.mount: Deactivated successfully.
Nov 23 03:12:59 localhost podman[72785]: 2025-11-23 08:12:59.400072755 +0000 UTC m=+0.090744322 container remove 79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_archimedes, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-09-24T08:57:55, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 03:12:59 localhost systemd[1]: libpod-conmon-79d073a2c0c93745c2f073852060a7caa978d5b7be2a68784d067d77d1310c49.scope: Deactivated successfully.
Nov 23 03:13:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:13:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:13:09 localhost podman[72816]: 2025-11-23 08:13:09.190670989 +0000 UTC m=+0.093937578 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, container_name=iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:13:09 localhost systemd[1]: tmp-crun.buOMxA.mount: Deactivated successfully.
Nov 23 03:13:09 localhost podman[72815]: 2025-11-23 08:13:09.238737432 +0000 UTC m=+0.145023641 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, architecture=x86_64, release=1761123044, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:13:09 localhost podman[72815]: 2025-11-23 08:13:09.248590763 +0000 UTC m=+0.154877022 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:51:28Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, container_name=collectd, io.buildah.version=1.41.4, io.openshift.expose-services=)
Nov 23 03:13:09 localhost podman[72816]: 2025-11-23 08:13:09.253760369 +0000 UTC m=+0.157026938 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Nov 23 03:13:09 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:13:09 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:13:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:13:12 localhost podman[72856]: 2025-11-23 08:13:12.173504207 +0000 UTC m=+0.082245198 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, 
build-date=2025-11-18T22:49:32Z, release=1761123044, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12)
Nov 23 03:13:12 localhost podman[72856]: 2025-11-23 08:13:12.209997413 +0000 UTC m=+0.118738374 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-cron, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, tcib_managed=true)
Nov 23 03:13:12 localhost podman[72857]: 2025-11-23 08:13:12.223769248 +0000 UTC m=+0.129453269 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=)
Nov 23 03:13:12 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:13:12 localhost podman[72858]: 2025-11-23 08:13:12.283042346 +0000 UTC m=+0.185244144 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:13:12 localhost podman[72857]: 2025-11-23 08:13:12.287830893 +0000 UTC m=+0.193514914 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Nov 23 03:13:12 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:13:12 localhost podman[72858]: 2025-11-23 08:13:12.342901521 +0000 UTC m=+0.245103369 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Nov 23 03:13:12 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:13:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:13:13 localhost podman[72928]: 2025-11-23 08:13:13.168396032 +0000 UTC m=+0.079564437 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4)
Nov 23 03:13:13 localhost podman[72928]: 2025-11-23 08:13:13.52414787 +0000 UTC m=+0.435316265 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:13:13 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:13:16 localhost podman[72951]: 2025-11-23 08:13:16.179463168 +0000 UTC m=+0.081309283 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git)
Nov 23 03:13:16 localhost podman[72951]: 2025-11-23 08:13:16.246988885 +0000 UTC m=+0.148835010 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, release=1761123044, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:13:16 localhost podman[72953]: 2025-11-23 08:13:16.246094331 +0000 UTC m=+0.143072468 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, version=17.1.12, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z)
Nov 23 03:13:16 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:13:16 localhost podman[72952]: 2025-11-23 08:13:16.299923067 +0000 UTC m=+0.200127349 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64)
Nov 23 03:13:16 localhost podman[72952]: 2025-11-23 08:13:16.674172373 +0000 UTC m=+0.574376605 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container)
Nov 23 03:13:16 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:13:16 localhost podman[72953]: 2025-11-23 08:13:16.69521441 +0000 UTC m=+0.592192617 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd)
Nov 23 03:13:16 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:13:35 localhost sshd[73027]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:13:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:13:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:13:40 localhost systemd[1]: tmp-crun.n23sjV.mount: Deactivated successfully.
Nov 23 03:13:40 localhost podman[73030]: 2025-11-23 08:13:40.234626723 +0000 UTC m=+0.137008318 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid)
Nov 23 03:13:40 localhost podman[73029]: 2025-11-23 08:13:40.196622177 +0000 UTC m=+0.104174659 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-collectd, config_id=tripleo_step3, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container)
Nov 23 03:13:40 localhost podman[73030]: 2025-11-23 08:13:40.268358706 +0000 UTC m=+0.170740301 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, distribution-scope=public)
Nov 23 03:13:40 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:13:40 localhost podman[73029]: 2025-11-23 08:13:40.281419551 +0000 UTC m=+0.188972013 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, release=1761123044, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com)
Nov 23 03:13:40 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:13:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:13:43 localhost podman[73067]: 2025-11-23 08:13:43.167876248 +0000 UTC m=+0.076716812 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044)
Nov 23 03:13:43 localhost podman[73066]: 2025-11-23 08:13:43.219913445 +0000 UTC m=+0.129212141 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044)
Nov 23 03:13:43 localhost podman[73067]: 2025-11-23 08:13:43.224196829 +0000 UTC m=+0.133037403 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12)
Nov 23 03:13:43 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:13:43 localhost podman[73066]: 2025-11-23 08:13:43.2767277 +0000 UTC m=+0.186026426 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, version=17.1.12)
Nov 23 03:13:43 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:13:43 localhost podman[73068]: 2025-11-23 08:13:43.279095452 +0000 UTC m=+0.180724135 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:13:43 localhost podman[73068]: 2025-11-23 08:13:43.358122004 +0000 UTC m=+0.259750687 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public)
Nov 23 03:13:43 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:13:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:13:44 localhost podman[73139]: 2025-11-23 08:13:44.171159496 +0000 UTC m=+0.079539137 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:13:44 localhost podman[73139]: 2025-11-23 08:13:44.584993441 +0000 UTC m=+0.493373042 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4)
Nov 23 03:13:44 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:13:47 localhost podman[73163]: 2025-11-23 08:13:47.183166266 +0000 UTC m=+0.079839724 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1)
Nov 23 03:13:47 localhost systemd[1]: tmp-crun.zCtEzA.mount: Deactivated successfully.
Nov 23 03:13:47 localhost podman[73163]: 2025-11-23 08:13:47.237956627 +0000 UTC m=+0.134630085 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
version=17.1.12, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container)
Nov 23 03:13:47 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:13:47 localhost podman[73162]: 2025-11-23 08:13:47.239791475 +0000 UTC m=+0.139403501 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4)
Nov 23 03:13:47 localhost podman[73164]: 2025-11-23 08:13:47.29814165 +0000 UTC m=+0.189779555 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, 
version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:13:47 localhost podman[73162]: 2025-11-23 08:13:47.322026272 +0000 UTC m=+0.221638268 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:13:47 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:13:47 localhost podman[73164]: 2025-11-23 08:13:47.529062442 +0000 UTC m=+0.420700347 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Nov 23 03:13:47 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:14:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:14:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:14:11 localhost systemd[1]: tmp-crun.DKcR08.mount: Deactivated successfully.
Nov 23 03:14:11 localhost podman[73317]: 2025-11-23 08:14:11.182905905 +0000 UTC m=+0.088486314 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, config_id=tripleo_step3, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044)
Nov 23 03:14:11 localhost podman[73317]: 2025-11-23 08:14:11.217953473 +0000 UTC m=+0.123533882 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, release=1761123044, container_name=iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, vcs-type=git, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:14:11 localhost podman[73316]: 2025-11-23 08:14:11.227062514 +0000 UTC m=+0.132649483 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, release=1761123044, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team)
Nov 23 03:14:11 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:14:11 localhost podman[73316]: 2025-11-23 08:14:11.240133381 +0000 UTC m=+0.145720350 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, tcib_managed=true, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=collectd, version=17.1.12, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Nov 23 03:14:11 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:14:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:14:14 localhost systemd[1]: tmp-crun.H33pEd.mount: Deactivated successfully.
Nov 23 03:14:14 localhost podman[73356]: 2025-11-23 08:14:14.178432239 +0000 UTC m=+0.078736805 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:14:14 localhost podman[73356]: 2025-11-23 08:14:14.211866574 +0000 UTC m=+0.112171180 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container)
Nov 23 03:14:14 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:14:14 localhost podman[73354]: 2025-11-23 08:14:14.228234807 +0000 UTC m=+0.133228897 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:14:14 localhost podman[73354]: 2025-11-23 08:14:14.237323078 +0000 UTC m=+0.142317158 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:14:14 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:14:14 localhost podman[73355]: 2025-11-23 08:14:14.326264932 +0000 UTC m=+0.228797557 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc.)
Nov 23 03:14:14 localhost podman[73355]: 2025-11-23 08:14:14.353436962 +0000 UTC m=+0.255969617 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Nov 23 03:14:14 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:14:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:14:15 localhost podman[73426]: 2025-11-23 08:14:15.165682223 +0000 UTC m=+0.073961809 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:14:15 localhost podman[73426]: 2025-11-23 08:14:15.652338885 +0000 UTC m=+0.560618431 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Nov 23 03:14:15 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:14:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:14:18 localhost podman[73451]: 2025-11-23 08:14:18.169080485 +0000 UTC m=+0.074609456 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:14:18 localhost podman[73452]: 2025-11-23 08:14:18.179195523 +0000 UTC m=+0.076885696 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, architecture=x86_64, tcib_managed=true, config_id=tripleo_step1, io.openshift.expose-services=, version=17.1.12)
Nov 23 03:14:18 localhost podman[73451]: 2025-11-23 08:14:18.19271158 +0000 UTC m=+0.098240481 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:14:18 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:14:18 localhost podman[73450]: 2025-11-23 08:14:18.275717747 +0000 UTC m=+0.180330063 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:14:18 localhost podman[73450]: 2025-11-23 08:14:18.309978364 +0000 UTC m=+0.214590720 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, 
io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Nov 23 03:14:18 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:14:18 localhost podman[73452]: 2025-11-23 08:14:18.407684511 +0000 UTC m=+0.305374684 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Nov 23 03:14:18 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:14:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:14:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:14:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:14:42 localhost recover_tripleo_nova_virtqemud[73536]: 61733
Nov 23 03:14:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:14:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:14:42 localhost podman[73529]: 2025-11-23 08:14:42.192907073 +0000 UTC m=+0.091643547 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, version=17.1.12, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-iscsid, release=1761123044, 
config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Nov 23 03:14:42 localhost podman[73529]: 2025-11-23 08:14:42.228829693 +0000 UTC m=+0.127566167 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack 
Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, url=https://www.redhat.com)
Nov 23 03:14:42 localhost systemd[1]: tmp-crun.RhAuzm.mount: Deactivated successfully.
Nov 23 03:14:42 localhost podman[73528]: 2025-11-23 08:14:42.243114042 +0000 UTC m=+0.144854015 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:14:42 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:14:42 localhost podman[73528]: 2025-11-23 08:14:42.279017821 +0000 UTC m=+0.180757794 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, vcs-type=git, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:14:42 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:14:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:14:45 localhost podman[73570]: 2025-11-23 08:14:45.172089223 +0000 UTC m=+0.071767311 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:14:45 localhost podman[73571]: 2025-11-23 08:14:45.245711402 +0000 UTC m=+0.142424840 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4)
Nov 23 03:14:45 localhost podman[73571]: 2025-11-23 08:14:45.273131288 +0000 UTC m=+0.169844736 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:14:45 localhost podman[73569]: 2025-11-23 08:14:45.287424416 +0000 UTC m=+0.188908151 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z)
Nov 23 03:14:45 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:14:45 localhost podman[73570]: 2025-11-23 08:14:45.310403034 +0000 UTC m=+0.210081152 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:14:45 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:14:45 localhost podman[73569]: 2025-11-23 08:14:45.324058526 +0000 UTC m=+0.225542261 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Nov 23 03:14:45 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:14:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:14:46 localhost podman[73642]: 2025-11-23 08:14:46.171735325 +0000 UTC m=+0.079671611 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, version=17.1.12, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:14:46 localhost podman[73642]: 2025-11-23 08:14:46.57199616 +0000 UTC m=+0.479932486 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:14:46 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:14:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:14:49 localhost systemd[1]: tmp-crun.rq0tgz.mount: Deactivated successfully.
Nov 23 03:14:49 localhost podman[73666]: 2025-11-23 08:14:49.163681643 +0000 UTC m=+0.071703079 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public)
Nov 23 03:14:49 localhost podman[73667]: 2025-11-23 08:14:49.183740454 +0000 UTC m=+0.084847877 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr)
Nov 23 03:14:49 localhost podman[73666]: 2025-11-23 08:14:49.187830013 +0000 UTC m=+0.095851459 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:14:49 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:14:49 localhost podman[73665]: 2025-11-23 08:14:49.263941427 +0000 UTC m=+0.171674426 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:14:49 localhost podman[73665]: 2025-11-23 08:14:49.328982438 +0000 UTC m=+0.236715437 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, container_name=ovn_metadata_agent, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com)
Nov 23 03:14:49 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:14:49 localhost podman[73667]: 2025-11-23 08:14:49.420975354 +0000 UTC m=+0.322082777 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git, distribution-scope=public)
Nov 23 03:14:49 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:14:56 localhost sshd[73739]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:15:02 localhost podman[73842]: 2025-11-23 08:15:02.867044462 +0000 UTC m=+0.091081633 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, release=553, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 03:15:02 localhost podman[73842]: 2025-11-23 08:15:02.992207294 +0000 UTC m=+0.216244435 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, name=rhceph)
Nov 23 03:15:04 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:15:04 localhost recover_tripleo_nova_virtqemud[73986]: 61733
Nov 23 03:15:04 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:15:04 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:15:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:15:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:15:13 localhost podman[73988]: 2025-11-23 08:15:13.203075664 +0000 UTC m=+0.097706168 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:15:13 localhost podman[73988]: 2025-11-23 08:15:13.214105145 +0000 UTC m=+0.108735669 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=iscsid)
Nov 23 03:15:13 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:15:13 localhost systemd[1]: tmp-crun.U7vIxA.mount: Deactivated successfully.
Nov 23 03:15:13 localhost podman[73987]: 2025-11-23 08:15:13.302220119 +0000 UTC m=+0.196382980 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, container_name=collectd, summary=Red Hat 
OpenStack Platform 17.1 collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible)
Nov 23 03:15:13 localhost podman[73987]: 2025-11-23 08:15:13.317366609 +0000 UTC m=+0.211529440 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, container_name=collectd, io.buildah.version=1.41.4)
Nov 23 03:15:13 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:15:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:15:15 localhost systemd[1]: tmp-crun.hec0Jb.mount: Deactivated successfully.
Nov 23 03:15:15 localhost podman[74072]: 2025-11-23 08:15:15.440463629 +0000 UTC m=+0.069369608 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, vcs-type=git, version=17.1.12, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:15:15 localhost podman[74072]: 2025-11-23 08:15:15.448725138 +0000 UTC m=+0.077631147 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, container_name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:15:15 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:15:15 localhost podman[74075]: 2025-11-23 08:15:15.490328419 +0000 UTC m=+0.115429827 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Nov 23 03:15:15 localhost python3[74073]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:15:15 localhost podman[74074]: 2025-11-23 08:15:15.54554882 +0000 UTC m=+0.174567371 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git)
Nov 23 03:15:15 localhost podman[74074]: 2025-11-23 08:15:15.598850201 +0000 UTC m=+0.227868712 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1)
Nov 23 03:15:15 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:15:15 localhost podman[74075]: 2025-11-23 08:15:15.64944322 +0000 UTC m=+0.274544648 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:15:15 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:15:15 localhost python3[74186]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885715.2034616-113723-27757565274948/source _original_basename=tmp0zj7322s follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:15:16 localhost systemd[1]: tmp-crun.2n4AOU.mount: Deactivated successfully.
Nov 23 03:15:16 localhost podman[74217]: 2025-11-23 08:15:16.845036579 +0000 UTC m=+0.090390054 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_migration_target)
Nov 23 03:15:16 localhost python3[74216]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:15:17 localhost podman[74217]: 2025-11-23 08:15:17.244026381 +0000 UTC m=+0.489379866 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:15:17 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:15:18 localhost ansible-async_wrapper.py[74411]: Invoked with 62337046066 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885718.057996-113860-20857439399624/AnsiballZ_command.py _
Nov 23 03:15:18 localhost ansible-async_wrapper.py[74414]: Starting module and watcher
Nov 23 03:15:18 localhost ansible-async_wrapper.py[74414]: Start watching 74415 (3600)
Nov 23 03:15:18 localhost ansible-async_wrapper.py[74415]: Start module (74415)
Nov 23 03:15:18 localhost ansible-async_wrapper.py[74411]: Return async_wrapper task started.
Nov 23 03:15:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:15:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:15:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:15:19 localhost podman[74433]: 2025-11-23 08:15:19.671175909 +0000 UTC m=+0.090489836 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:15:19 localhost podman[74433]: 2025-11-23 08:15:19.7078722 +0000 UTC m=+0.127186137 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:15:19 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:15:19 localhost podman[74435]: 2025-11-23 08:15:19.717444784 +0000 UTC m=+0.135405326 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, release=1761123044, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack 
TripleO Team, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:15:19 localhost python3[74432]: ansible-ansible.legacy.async_status Invoked with jid=62337046066.74411 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:15:19 localhost systemd[1]: tmp-crun.EeTVPb.mount: Deactivated successfully.
Nov 23 03:15:19 localhost podman[74434]: 2025-11-23 08:15:19.771489754 +0000 UTC m=+0.192608239 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, container_name=ovn_controller, vendor=Red Hat, Inc.)
Nov 23 03:15:19 localhost podman[74434]: 2025-11-23 08:15:19.818829097 +0000 UTC m=+0.239947582 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:15:19 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:15:19 localhost podman[74435]: 2025-11-23 08:15:19.875019045 +0000 UTC m=+0.292979677 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4)
Nov 23 03:15:19 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 23 03:15:23 localhost puppet-user[74465]:   (file: /etc/puppet/hiera.yaml)
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: Undefined variable '::deploy_config_name';
Nov 23 03:15:23 localhost puppet-user[74465]:   (file & line not available)
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 23 03:15:23 localhost puppet-user[74465]:   (file & line not available)
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:15:23 localhost puppet-user[74465]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:15:23 localhost puppet-user[74465]:                    with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:15:23 localhost puppet-user[74465]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:15:23 localhost puppet-user[74465]:                    with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:15:23 localhost puppet-user[74465]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:15:23 localhost puppet-user[74465]:                    with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:15:23 localhost puppet-user[74465]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:15:23 localhost puppet-user[74465]:                    with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:15:23 localhost puppet-user[74465]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Nov 23 03:15:23 localhost puppet-user[74465]:                    with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Nov 23 03:15:23 localhost puppet-user[74465]:   (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Nov 23 03:15:23 localhost puppet-user[74465]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Nov 23 03:15:23 localhost puppet-user[74465]: Notice: Compiled catalog for np0005532586.localdomain in environment production in 0.22 seconds
Nov 23 03:15:23 localhost ansible-async_wrapper.py[74414]: 74415 still running (3600)
Nov 23 03:15:23 localhost puppet-user[74465]: Notice: Applied catalog in 0.30 seconds
Nov 23 03:15:23 localhost puppet-user[74465]: Application:
Nov 23 03:15:23 localhost puppet-user[74465]:   Initial environment: production
Nov 23 03:15:23 localhost puppet-user[74465]:   Converged environment: production
Nov 23 03:15:23 localhost puppet-user[74465]:         Run mode: user
Nov 23 03:15:23 localhost puppet-user[74465]: Changes:
Nov 23 03:15:23 localhost puppet-user[74465]: Events:
Nov 23 03:15:23 localhost puppet-user[74465]: Resources:
Nov 23 03:15:23 localhost puppet-user[74465]:            Total: 19
Nov 23 03:15:23 localhost puppet-user[74465]: Time:
Nov 23 03:15:23 localhost puppet-user[74465]:       Filebucket: 0.00
Nov 23 03:15:23 localhost puppet-user[74465]:          Package: 0.00
Nov 23 03:15:23 localhost puppet-user[74465]:         Schedule: 0.00
Nov 23 03:15:23 localhost puppet-user[74465]:             Exec: 0.01
Nov 23 03:15:23 localhost puppet-user[74465]:           Augeas: 0.01
Nov 23 03:15:23 localhost puppet-user[74465]:             File: 0.02
Nov 23 03:15:23 localhost puppet-user[74465]:          Service: 0.05
Nov 23 03:15:23 localhost puppet-user[74465]:   Config retrieval: 0.28
Nov 23 03:15:23 localhost puppet-user[74465]:   Transaction evaluation: 0.29
Nov 23 03:15:23 localhost puppet-user[74465]:   Catalog application: 0.30
Nov 23 03:15:23 localhost puppet-user[74465]:         Last run: 1763885723
Nov 23 03:15:23 localhost puppet-user[74465]:            Total: 0.31
Nov 23 03:15:23 localhost puppet-user[74465]: Version:
Nov 23 03:15:23 localhost puppet-user[74465]:           Config: 1763885723
Nov 23 03:15:23 localhost puppet-user[74465]:           Puppet: 7.10.0
Nov 23 03:15:23 localhost ansible-async_wrapper.py[74415]: Module complete (74415)
Nov 23 03:15:28 localhost ansible-async_wrapper.py[74414]: Done in kid B.
Nov 23 03:15:30 localhost python3[74647]: ansible-ansible.legacy.async_status Invoked with jid=62337046066.74411 mode=status _async_dir=/tmp/.ansible_async
Nov 23 03:15:30 localhost python3[74663]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 03:15:31 localhost python3[74679]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:15:31 localhost python3[74729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:15:31 localhost python3[74747]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpyh_mrkob recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 03:15:32 localhost python3[74777]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:33 localhost python3[74882]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Nov 23 03:15:34 localhost python3[74901]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:35 localhost python3[74933]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:15:35 localhost python3[74983]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:15:36 localhost python3[75001]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:36 localhost python3[75063]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:15:36 localhost python3[75081]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:37 localhost python3[75143]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:15:37 localhost python3[75161]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:38 localhost python3[75223]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:15:38 localhost python3[75241]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:39 localhost python3[75271]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:15:39 localhost systemd[1]: Reloading.
Nov 23 03:15:39 localhost systemd-rc-local-generator[75294]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:15:39 localhost systemd-sysv-generator[75298]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:15:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:15:39 localhost python3[75357]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:15:40 localhost python3[75375]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:40 localhost python3[75437]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 23 03:15:41 localhost python3[75455]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:15:41 localhost python3[75485]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:15:41 localhost systemd[1]: Reloading.
Nov 23 03:15:41 localhost systemd-rc-local-generator[75508]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:15:41 localhost systemd-sysv-generator[75513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:15:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:15:42 localhost systemd[1]: Starting Create netns directory...
Nov 23 03:15:42 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 03:15:42 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 03:15:42 localhost systemd[1]: Finished Create netns directory.
Nov 23 03:15:42 localhost python3[75542]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Nov 23 03:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:15:44 localhost podman[75586]: 2025-11-23 08:15:44.182596003 +0000 UTC m=+0.088925625 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public)
Nov 23 03:15:44 localhost podman[75586]: 2025-11-23 08:15:44.190713518 +0000 UTC m=+0.097043110 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4)
Nov 23 03:15:44 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:15:44 localhost podman[75585]: 2025-11-23 08:15:44.284313366 +0000 UTC m=+0.191059549 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Nov 23 03:15:44 localhost podman[75585]: 2025-11-23 08:15:44.294997498 +0000 UTC m=+0.201743701 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container)
Nov 23 03:15:44 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:15:44 localhost python3[75629]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Nov 23 03:15:44 localhost podman[75677]: 2025-11-23 08:15:44.783243192 +0000 UTC m=+0.091317137 container create 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:15:44 localhost podman[75677]: 2025-11-23 08:15:44.732625553 +0000 UTC m=+0.040699548 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 03:15:44 localhost systemd[1]: Started libpod-conmon-6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.scope.
Nov 23 03:15:44 localhost systemd[1]: Started libcrun container.
Nov 23 03:15:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11023c817bbe88eb9f0fc958f07cd932ced2301bc519b0609951aaa199e4f64/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:15:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11023c817bbe88eb9f0fc958f07cd932ced2301bc519b0609951aaa199e4f64/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 03:15:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11023c817bbe88eb9f0fc958f07cd932ced2301bc519b0609951aaa199e4f64/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 03:15:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11023c817bbe88eb9f0fc958f07cd932ced2301bc519b0609951aaa199e4f64/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:15:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11023c817bbe88eb9f0fc958f07cd932ced2301bc519b0609951aaa199e4f64/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:15:44 localhost podman[75677]: 2025-11-23 08:15:44.893199504 +0000 UTC m=+0.201273469 container init 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step5, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:15:44 localhost podman[75677]: 2025-11-23 08:15:44.937623609 +0000 UTC m=+0.245697644 container start 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:15:44 localhost systemd-logind[761]: Existing logind session ID 28 used by new audit session, ignoring.
Nov 23 03:15:44 localhost python3[75629]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 03:15:44 localhost systemd[1]: Created slice User Slice of UID 0.
Nov 23 03:15:44 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 23 03:15:44 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 23 03:15:44 localhost systemd[1]: Starting User Manager for UID 0...
Nov 23 03:15:45 localhost podman[75698]: 2025-11-23 08:15:45.047292093 +0000 UTC m=+0.098513830 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:15:45 localhost podman[75698]: 2025-11-23 08:15:45.103776988 +0000 UTC m=+0.154998685 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step5, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12)
Nov 23 03:15:45 localhost podman[75698]: unhealthy
Nov 23 03:15:45 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:15:45 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:15:45 localhost systemd[75718]: Queued start job for default target Main User Target.
Nov 23 03:15:45 localhost systemd[75718]: Created slice User Application Slice.
Nov 23 03:15:45 localhost systemd[75718]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 23 03:15:45 localhost systemd[75718]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 03:15:45 localhost systemd[75718]: Reached target Paths.
Nov 23 03:15:45 localhost systemd[75718]: Reached target Timers.
Nov 23 03:15:45 localhost systemd[75718]: Starting D-Bus User Message Bus Socket...
Nov 23 03:15:45 localhost systemd[75718]: Starting Create User's Volatile Files and Directories...
Nov 23 03:15:45 localhost systemd[75718]: Finished Create User's Volatile Files and Directories.
Nov 23 03:15:45 localhost systemd[75718]: Listening on D-Bus User Message Bus Socket.
Nov 23 03:15:45 localhost systemd[75718]: Reached target Sockets.
Nov 23 03:15:45 localhost systemd[75718]: Reached target Basic System.
Nov 23 03:15:45 localhost systemd[75718]: Reached target Main User Target.
Nov 23 03:15:45 localhost systemd[75718]: Startup finished in 155ms.
Nov 23 03:15:45 localhost systemd[1]: Started User Manager for UID 0.
Nov 23 03:15:45 localhost systemd[1]: Started Session c10 of User root.
Nov 23 03:15:45 localhost systemd[1]: session-c10.scope: Deactivated successfully.
Nov 23 03:15:45 localhost podman[75798]: 2025-11-23 08:15:45.514366966 +0000 UTC m=+0.087589640 container create 917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, io.openshift.expose-services=, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, tcib_managed=true)
Nov 23 03:15:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:15:45 localhost systemd[1]: Started libpod-conmon-917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c.scope.
Nov 23 03:15:45 localhost podman[75798]: 2025-11-23 08:15:45.466064817 +0000 UTC m=+0.039287501 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 03:15:45 localhost systemd[1]: Started libcrun container.
Nov 23 03:15:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5fdd560461b59dde387445faf8d301f89b69299c673ce8ac4c4cf7e6c32a08/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Nov 23 03:15:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db5fdd560461b59dde387445faf8d301f89b69299c673ce8ac4c4cf7e6c32a08/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 03:15:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:15:45 localhost podman[75811]: 2025-11-23 08:15:45.635788911 +0000 UTC m=+0.085669490 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container)
Nov 23 03:15:45 localhost podman[75811]: 2025-11-23 08:15:45.644314936 +0000 UTC m=+0.094195495 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-cron, distribution-scope=public, 
build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond)
Nov 23 03:15:45 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:15:45 localhost podman[75798]: 2025-11-23 08:15:45.651991629 +0000 UTC m=+0.225214303 container init 917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, architecture=x86_64, release=1761123044, distribution-scope=public, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:15:45 localhost podman[75798]: 2025-11-23 08:15:45.664984583 +0000 UTC m=+0.238207227 container start 917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.buildah.version=1.41.4, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, container_name=nova_wait_for_compute_service, managed_by=tripleo_ansible, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 
nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public)
Nov 23 03:15:45 localhost podman[75798]: 2025-11-23 08:15:45.665278161 +0000 UTC m=+0.238500865 container attach 917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, release=1761123044, managed_by=tripleo_ansible, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=nova_wait_for_compute_service, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1)
Nov 23 03:15:45 localhost podman[75830]: 2025-11-23 08:15:45.719586119 +0000 UTC m=+0.078453539 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:15:45 localhost podman[75830]: 2025-11-23 08:15:45.747941199 +0000 UTC m=+0.106808699 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:12:45Z)
Nov 23 03:15:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:15:45 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:15:45 localhost podman[75864]: 2025-11-23 08:15:45.847596417 +0000 UTC m=+0.077503322 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.)
Nov 23 03:15:45 localhost podman[75864]: 2025-11-23 08:15:45.874988872 +0000 UTC m=+0.104895717 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, vcs-type=git, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:15:45 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:15:48 localhost systemd[1]: tmp-crun.2cC1tu.mount: Deactivated successfully.
Nov 23 03:15:48 localhost podman[75894]: 2025-11-23 08:15:48.192913029 +0000 UTC m=+0.099093913 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:15:48 localhost podman[75894]: 2025-11-23 08:15:48.593129873 +0000 UTC m=+0.499310777 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:15:48 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:15:50 localhost systemd[1]: tmp-crun.rpXzdr.mount: Deactivated successfully.
Nov 23 03:15:50 localhost podman[75918]: 2025-11-23 08:15:50.19134534 +0000 UTC m=+0.092181251 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Nov 23 03:15:50 localhost systemd[1]: tmp-crun.HBEu4y.mount: Deactivated successfully.
Nov 23 03:15:50 localhost podman[75918]: 2025-11-23 08:15:50.250199218 +0000 UTC m=+0.151035109 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, container_name=ovn_controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:15:50 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:15:50 localhost podman[75917]: 2025-11-23 08:15:50.251725918 +0000 UTC m=+0.153938156 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent)
Nov 23 03:15:50 localhost podman[75917]: 2025-11-23 08:15:50.334797427 +0000 UTC m=+0.237009625 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, 
Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Nov 23 03:15:50 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:15:50 localhost podman[75919]: 2025-11-23 08:15:50.30432362 +0000 UTC m=+0.201071834 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.openshift.expose-services=, architecture=x86_64)
Nov 23 03:15:50 localhost podman[75919]: 2025-11-23 08:15:50.508034833 +0000 UTC m=+0.404783087 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:15:50 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:15:55 localhost systemd[1]: Stopping User Manager for UID 0...
Nov 23 03:15:55 localhost systemd[75718]: Activating special unit Exit the Session...
Nov 23 03:15:55 localhost systemd[75718]: Stopped target Main User Target.
Nov 23 03:15:55 localhost systemd[75718]: Stopped target Basic System.
Nov 23 03:15:55 localhost systemd[75718]: Stopped target Paths.
Nov 23 03:15:55 localhost systemd[75718]: Stopped target Sockets.
Nov 23 03:15:55 localhost systemd[75718]: Stopped target Timers.
Nov 23 03:15:55 localhost systemd[75718]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 03:15:55 localhost systemd[75718]: Closed D-Bus User Message Bus Socket.
Nov 23 03:15:55 localhost systemd[75718]: Stopped Create User's Volatile Files and Directories.
Nov 23 03:15:55 localhost systemd[75718]: Removed slice User Application Slice.
Nov 23 03:15:55 localhost systemd[75718]: Reached target Shutdown.
Nov 23 03:15:55 localhost systemd[75718]: Finished Exit the Session.
Nov 23 03:15:55 localhost systemd[75718]: Reached target Exit the Session.
Nov 23 03:15:55 localhost systemd[1]: user@0.service: Deactivated successfully.
Nov 23 03:15:55 localhost systemd[1]: Stopped User Manager for UID 0.
Nov 23 03:15:55 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 23 03:15:55 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 23 03:15:55 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 23 03:15:55 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 23 03:15:55 localhost systemd[1]: Removed slice User Slice of UID 0.
Nov 23 03:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:16:15 localhost podman[76072]: 2025-11-23 08:16:15.18359541 +0000 UTC m=+0.089211049 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:16:15 localhost podman[76073]: 2025-11-23 08:16:15.228913533 +0000 UTC m=+0.132752154 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:16:15 localhost podman[76073]: 2025-11-23 08:16:15.244378694 +0000 UTC m=+0.148217315 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack 
Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, config_id=tripleo_step3, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:16:15 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:16:15 localhost podman[76103]: 2025-11-23 08:16:15.303121983 +0000 UTC m=+0.096911433 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, version=17.1.12, config_id=tripleo_step5, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible)
Nov 23 03:16:15 localhost podman[76072]: 2025-11-23 08:16:15.330274274 +0000 UTC m=+0.235889903 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:16:15 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:16:15 localhost podman[76103]: 2025-11-23 08:16:15.36213321 +0000 UTC m=+0.155922650 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Nov 23 03:16:15 localhost podman[76103]: unhealthy
Nov 23 03:16:15 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:16:15 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:16:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:16:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:16:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:16:16 localhost podman[76130]: 2025-11-23 08:16:16.170584178 +0000 UTC m=+0.077380685 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Nov 23 03:16:16 localhost systemd[1]: tmp-crun.G0YaU2.mount: Deactivated successfully.
Nov 23 03:16:16 localhost podman[76130]: 2025-11-23 08:16:16.179784653 +0000 UTC m=+0.086581110 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, build-date=2025-11-18T22:49:32Z, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044)
Nov 23 03:16:16 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:16:16 localhost systemd[1]: tmp-crun.62BZot.mount: Deactivated successfully.
Nov 23 03:16:16 localhost podman[76131]: 2025-11-23 08:16:16.227978312 +0000 UTC m=+0.131495172 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:16:16 localhost podman[76132]: 2025-11-23 08:16:16.28218161 +0000 UTC m=+0.182187557 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:16:16 localhost podman[76131]: 2025-11-23 08:16:16.307006159 +0000 UTC m=+0.210522979 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Nov 23 03:16:16 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:16:16 localhost podman[76132]: 2025-11-23 08:16:16.337033186 +0000 UTC m=+0.237039173 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:16:16 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:16:18 localhost sshd[76202]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:16:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:16:19 localhost systemd[1]: tmp-crun.piA1U8.mount: Deactivated successfully.
Nov 23 03:16:19 localhost podman[76204]: 2025-11-23 08:16:19.182967405 +0000 UTC m=+0.088280065 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:16:19 localhost podman[76204]: 2025-11-23 08:16:19.528977669 +0000 UTC m=+0.434290349 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, config_id=tripleo_step4)
Nov 23 03:16:19 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:16:21 localhost systemd[1]: tmp-crun.pmg5Gg.mount: Deactivated successfully.
Nov 23 03:16:21 localhost podman[76227]: 2025-11-23 08:16:21.187915693 +0000 UTC m=+0.093658667 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044)
Nov 23 03:16:21 localhost podman[76229]: 2025-11-23 08:16:21.232738863 +0000 UTC m=+0.132880389 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, distribution-scope=public, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:16:21 localhost podman[76228]: 2025-11-23 08:16:21.280585273 +0000 UTC m=+0.184350045 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, distribution-scope=public, vendor=Red Hat, Inc.)
Nov 23 03:16:21 localhost podman[76228]: 2025-11-23 08:16:21.30685913 +0000 UTC m=+0.210623892 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, 
url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container)
Nov 23 03:16:21 localhost podman[76227]: 2025-11-23 08:16:21.313205418 +0000 UTC m=+0.218948402 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Nov 23 03:16:21 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:16:21 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:16:21 localhost podman[76229]: 2025-11-23 08:16:21.428119988 +0000 UTC m=+0.328261474 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Nov 23 03:16:21 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:16:46 localhost podman[76301]: 2025-11-23 08:16:46.184862517 +0000 UTC m=+0.090243448 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, container_name=collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:16:46 localhost podman[76301]: 2025-11-23 08:16:46.221862129 +0000 UTC m=+0.127243030 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, build-date=2025-11-18T22:51:28Z, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://www.redhat.com, container_name=collectd, batch=17.1_20251118.1, distribution-scope=public)
Nov 23 03:16:46 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:16:46 localhost podman[76300]: 2025-11-23 08:16:46.231639538 +0000 UTC m=+0.138121707 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, release=1761123044, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public)
Nov 23 03:16:46 localhost podman[76302]: 2025-11-23 08:16:46.293679584 +0000 UTC m=+0.194419291 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:16:46 localhost podman[76302]: 2025-11-23 08:16:46.301665396 +0000 UTC m=+0.202405113 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
managed_by=tripleo_ansible, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, io.openshift.expose-services=, distribution-scope=public, vcs-type=git)
Nov 23 03:16:46 localhost podman[76300]: 2025-11-23 08:16:46.311131768 +0000 UTC m=+0.217613957 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., config_id=tripleo_step5, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Nov 23 03:16:46 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:16:46 localhost podman[76300]: unhealthy
Nov 23 03:16:46 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:16:46 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:16:46 localhost podman[76371]: 2025-11-23 08:16:46.43135783 +0000 UTC m=+0.077262983 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:16:46 localhost podman[76371]: 2025-11-23 08:16:46.478314865 +0000 UTC m=+0.124220118 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:16:46 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:16:46 localhost podman[76388]: 2025-11-23 08:16:46.492201504 +0000 UTC m=+0.084014221 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true)
Nov 23 03:16:46 localhost podman[76342]: 2025-11-23 08:16:46.404733842 +0000 UTC m=+0.178834178 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, container_name=logrotate_crond, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:16:46 localhost podman[76388]: 2025-11-23 08:16:46.517604428 +0000 UTC m=+0.109417205 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:16:46 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:16:46 localhost podman[76342]: 2025-11-23 08:16:46.540915057 +0000 UTC m=+0.315015443 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, tcib_managed=true, container_name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 
17.1 cron, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64)
Nov 23 03:16:46 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:16:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:16:50 localhost podman[76432]: 2025-11-23 08:16:50.169572052 +0000 UTC m=+0.076405539 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_migration_target, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, distribution-scope=public)
Nov 23 03:16:50 localhost podman[76432]: 2025-11-23 08:16:50.565122051 +0000 UTC m=+0.471955468 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:16:50 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:16:52 localhost systemd[1]: tmp-crun.RWrK1f.mount: Deactivated successfully.
Nov 23 03:16:52 localhost podman[76457]: 2025-11-23 08:16:52.191470149 +0000 UTC m=+0.093296928 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, architecture=x86_64, 
com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:16:52 localhost systemd[1]: tmp-crun.GAE75D.mount: Deactivated successfully.
Nov 23 03:16:52 localhost podman[76456]: 2025-11-23 08:16:52.240412088 +0000 UTC m=+0.144475425 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:16:52 localhost podman[76456]: 2025-11-23 08:16:52.288983628 +0000 UTC m=+0.193047015 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Nov 23 03:16:52 localhost podman[76458]: 2025-11-23 08:16:52.299709802 +0000 UTC m=+0.197594586 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, build-date=2025-11-18T22:49:46Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:16:52 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:16:52 localhost podman[76457]: 2025-11-23 08:16:52.342801986 +0000 UTC m=+0.244628755 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true)
Nov 23 03:16:52 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:16:52 localhost podman[76458]: 2025-11-23 08:16:52.489976752 +0000 UTC m=+0.387861486 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible)
Nov 23 03:16:52 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:17:00 localhost sshd[76531]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:17:00 localhost systemd[1]: session-27.scope: Deactivated successfully.
Nov 23 03:17:00 localhost systemd[1]: session-27.scope: Consumed 3.062s CPU time.
Nov 23 03:17:00 localhost systemd-logind[761]: Session 27 logged out. Waiting for processes to exit.
Nov 23 03:17:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:17:00 localhost systemd-logind[761]: Removed session 27.
Nov 23 03:17:00 localhost recover_tripleo_nova_virtqemud[76533]: 61733
Nov 23 03:17:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:17:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:17:17 localhost podman[76627]: 2025-11-23 08:17:17.213445099 +0000 UTC m=+0.089472825 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, 
batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:17:17 localhost podman[76627]: 2025-11-23 08:17:17.242630284 +0000 UTC m=+0.118657980 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, io.openshift.expose-services=, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Nov 23 03:17:17 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:17:17 localhost systemd[1]: tmp-crun.zciZH7.mount: Deactivated successfully.
Nov 23 03:17:17 localhost podman[76613]: 2025-11-23 08:17:17.30389461 +0000 UTC m=+0.192820949 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_compute, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:17:17 localhost podman[76612]: 2025-11-23 08:17:17.367202391 +0000 UTC m=+0.256384387 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=logrotate_crond, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z)
Nov 23 03:17:17 localhost podman[76614]: 2025-11-23 08:17:17.280698645 +0000 UTC m=+0.165809392 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Nov 23 03:17:17 localhost podman[76613]: 2025-11-23 08:17:17.39504543 +0000 UTC m=+0.283971799 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, vcs-type=git, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:17:17 localhost podman[76613]: unhealthy
Nov 23 03:17:17 localhost podman[76612]: 2025-11-23 08:17:17.404411418 +0000 UTC m=+0.293593414 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Nov 23 03:17:17 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:17:17 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:17:17 localhost podman[76614]: 2025-11-23 08:17:17.413966871 +0000 UTC m=+0.299077648 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Nov 23 03:17:17 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:17:17 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:17:17 localhost podman[76619]: 2025-11-23 08:17:17.466086505 +0000 UTC m=+0.341831664 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid)
Nov 23 03:17:17 localhost podman[76619]: 2025-11-23 08:17:17.516980436 +0000 UTC m=+0.392725565 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Nov 23 03:17:17 localhost podman[76615]: 2025-11-23 08:17:17.529291482 +0000 UTC m=+0.411512243 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 
collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, container_name=collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-collectd-container, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z)
Nov 23 03:17:17 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:17:17 localhost podman[76615]: 2025-11-23 08:17:17.567012434 +0000 UTC m=+0.449233195 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, 
config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Nov 23 03:17:17 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:17:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:17:21 localhost podman[76743]: 2025-11-23 08:17:21.216843913 +0000 UTC m=+0.081819913 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:17:21 localhost podman[76743]: 2025-11-23 08:17:21.58895922 +0000 UTC m=+0.453935240 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:17:21 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:17:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:17:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:17:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:17:23 localhost podman[76768]: 2025-11-23 08:17:23.191690771 +0000 UTC m=+0.085486231 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1761123044, url=https://www.redhat.com, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd)
Nov 23 03:17:23 localhost podman[76766]: 2025-11-23 08:17:23.241479253 +0000 UTC m=+0.140082640 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4)
Nov 23 03:17:23 localhost podman[76766]: 2025-11-23 08:17:23.295814294 +0000 UTC m=+0.194417701 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, architecture=x86_64, distribution-scope=public, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:17:23 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:17:23 localhost podman[76767]: 2025-11-23 08:17:23.298298251 +0000 UTC m=+0.193209540 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:17:23 localhost podman[76768]: 2025-11-23 08:17:23.371694278 +0000 UTC m=+0.265489778 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, version=17.1.12)
Nov 23 03:17:23 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:17:23 localhost podman[76767]: 2025-11-23 08:17:23.428004843 +0000 UTC m=+0.322916112 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, container_name=ovn_controller, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:17:23 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:17:24 localhost systemd[1]: tmp-crun.aAP4O7.mount: Deactivated successfully.
Nov 23 03:17:39 localhost sshd[76845]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:17:48 localhost podman[76847]: 2025-11-23 08:17:48.209380567 +0000 UTC m=+0.106031675 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, version=17.1.12, build-date=2025-11-18T22:49:32Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Nov 23 03:17:48 localhost podman[76847]: 2025-11-23 08:17:48.245288601 +0000 UTC m=+0.141939679 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4)
Nov 23 03:17:48 localhost podman[76848]: 2025-11-23 08:17:48.255123012 +0000 UTC m=+0.148845373 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12)
Nov 23 03:17:48 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:17:48 localhost podman[76848]: 2025-11-23 08:17:48.297404904 +0000 UTC m=+0.191127265 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5)
Nov 23 03:17:48 localhost podman[76848]: unhealthy
Nov 23 03:17:48 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:17:48 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:17:48 localhost podman[76849]: 2025-11-23 08:17:48.319020047 +0000 UTC m=+0.209181222 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true)
Nov 23 03:17:48 localhost podman[76849]: 2025-11-23 08:17:48.349326692 +0000 UTC m=+0.239487867 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:12:45Z, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Nov 23 03:17:48 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:17:48 localhost podman[76863]: 2025-11-23 08:17:48.364767662 +0000 UTC m=+0.245678842 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64)
Nov 23 03:17:48 localhost podman[76863]: 2025-11-23 08:17:48.422568176 +0000 UTC m=+0.303479346 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:17:48 localhost podman[76854]: 2025-11-23 08:17:48.429617083 +0000 UTC m=+0.310505823 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., architecture=x86_64, container_name=iscsid, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible)
Nov 23 03:17:48 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:17:48 localhost podman[76850]: 2025-11-23 08:17:48.475693806 +0000 UTC m=+0.360696404 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, tcib_managed=true, maintainer=OpenStack TripleO Team)
Nov 23 03:17:48 localhost podman[76850]: 2025-11-23 08:17:48.488019064 +0000 UTC m=+0.373021602 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, container_name=collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Nov 23 03:17:48 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:17:48 localhost podman[76854]: 2025-11-23 08:17:48.543425614 +0000 UTC m=+0.424314364 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, container_name=iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, 
com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044)
Nov 23 03:17:48 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:17:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:17:52 localhost podman[76971]: 2025-11-23 08:17:52.176679341 +0000 UTC m=+0.080923428 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:17:52 localhost podman[76971]: 2025-11-23 08:17:52.562039 +0000 UTC m=+0.466283067 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, tcib_managed=true, name=rhosp17/openstack-nova-compute, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Nov 23 03:17:52 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:17:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:17:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:17:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:17:54 localhost podman[76996]: 2025-11-23 08:17:54.179102062 +0000 UTC m=+0.079487170 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:46Z)
Nov 23 03:17:54 localhost podman[76994]: 2025-11-23 08:17:54.232990092 +0000 UTC m=+0.138952819 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:17:54 localhost podman[76994]: 2025-11-23 08:17:54.282159598 +0000 UTC m=+0.188122315 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64)
Nov 23 03:17:54 localhost podman[76995]: 2025-11-23 08:17:54.29130556 +0000 UTC m=+0.194268297 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:17:54 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:17:54 localhost podman[76995]: 2025-11-23 08:17:54.321904342 +0000 UTC m=+0.224867089 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Nov 23 03:17:54 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:17:54 localhost podman[76996]: 2025-11-23 08:17:54.388051789 +0000 UTC m=+0.288436967 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public)
Nov 23 03:17:54 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:18:04 localhost sshd[77070]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:18:19 localhost systemd[1]: tmp-crun.Xh7fOJ.mount: Deactivated successfully.
Nov 23 03:18:19 localhost podman[77150]: 2025-11-23 08:18:19.202260172 +0000 UTC m=+0.091614093 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Nov 23 03:18:19 localhost podman[77148]: 2025-11-23 08:18:19.252206998 +0000 UTC m=+0.149070729 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step5, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:18:19 localhost podman[77150]: 2025-11-23 08:18:19.250265346 +0000 UTC m=+0.139619177 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, 
name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:18:19 localhost podman[77151]: 2025-11-23 08:18:19.213465369 +0000 UTC m=+0.101772943 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, tcib_managed=true, container_name=iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1)
Nov 23 03:18:19 localhost podman[77148]: 2025-11-23 08:18:19.287385051 +0000 UTC m=+0.184248692 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, version=17.1.12)
Nov 23 03:18:19 localhost podman[77148]: unhealthy
Nov 23 03:18:19 localhost podman[77151]: 2025-11-23 08:18:19.295907467 +0000 UTC m=+0.184215041 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12)
Nov 23 03:18:19 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:18:19 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:18:19 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:18:19 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:18:19 localhost podman[77162]: 2025-11-23 08:18:19.368926065 +0000 UTC m=+0.251986069 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1761123044, container_name=ceilometer_agent_compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1)
Nov 23 03:18:19 localhost podman[77162]: 2025-11-23 08:18:19.404072999 +0000 UTC m=+0.287133013 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:18:19 localhost podman[77147]: 2025-11-23 08:18:19.412955484 +0000 UTC m=+0.309684811 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, container_name=logrotate_crond, distribution-scope=public, com.redhat.component=openstack-cron-container, tcib_managed=true, version=17.1.12, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git)
Nov 23 03:18:19 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:18:19 localhost podman[77147]: 2025-11-23 08:18:19.425935869 +0000 UTC m=+0.322665236 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64)
Nov 23 03:18:19 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:18:19 localhost podman[77149]: 2025-11-23 08:18:19.471393325 +0000 UTC m=+0.365539004 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com)
Nov 23 03:18:19 localhost podman[77149]: 2025-11-23 08:18:19.529983351 +0000 UTC m=+0.424129040 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, 
distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Nov 23 03:18:19 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:18:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:18:23 localhost podman[77276]: 2025-11-23 08:18:23.172840943 +0000 UTC m=+0.079077350 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:18:23 localhost podman[77276]: 2025-11-23 08:18:23.546331627 +0000 UTC m=+0.452568094 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc.)
Nov 23 03:18:23 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:18:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:18:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:18:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:18:25 localhost podman[77299]: 2025-11-23 08:18:25.169431699 +0000 UTC m=+0.077358785 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com)
Nov 23 03:18:25 localhost podman[77301]: 2025-11-23 08:18:25.190045506 +0000 UTC m=+0.089984779 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:18:25 localhost podman[77300]: 2025-11-23 08:18:25.227413677 +0000 UTC m=+0.129275592 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044)
Nov 23 03:18:25 localhost podman[77299]: 2025-11-23 08:18:25.237477435 +0000 UTC m=+0.145404511 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4)
Nov 23 03:18:25 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:18:25 localhost podman[77300]: 2025-11-23 08:18:25.255967796 +0000 UTC m=+0.157829791 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, 
com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, version=17.1.12, container_name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Nov 23 03:18:25 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:18:25 localhost podman[77301]: 2025-11-23 08:18:25.433008115 +0000 UTC m=+0.332947388 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, distribution-scope=public)
Nov 23 03:18:25 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:18:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:18:34 localhost recover_tripleo_nova_virtqemud[77377]: 61733
Nov 23 03:18:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:18:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:18:50 localhost systemd[1]: tmp-crun.4uu8BG.mount: Deactivated successfully.
Nov 23 03:18:50 localhost podman[77449]: 2025-11-23 08:18:50.206236942 +0000 UTC m=+0.109234160 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., 
config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:18:50 localhost systemd[1]: tmp-crun.8JJ2G0.mount: Deactivated successfully.
Nov 23 03:18:50 localhost podman[77450]: 2025-11-23 08:18:50.228518314 +0000 UTC m=+0.128722628 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_id=tripleo_step5, io.openshift.expose-services=, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, release=1761123044, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:18:50 localhost podman[77449]: 2025-11-23 08:18:50.252892461 +0000 UTC m=+0.155889759 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-cron, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond)
Nov 23 03:18:50 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:18:50 localhost podman[77464]: 2025-11-23 08:18:50.313032697 +0000 UTC m=+0.199814565 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:18:50 localhost podman[77464]: 2025-11-23 08:18:50.347836931 +0000 UTC m=+0.234618869 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:18:50 localhost podman[77451]: 2025-11-23 08:18:50.357811985 +0000 UTC m=+0.256039546 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Nov 23 03:18:50 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:18:50 localhost podman[77450]: 2025-11-23 08:18:50.374237892 +0000 UTC m=+0.274442236 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, release=1761123044, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Nov 23 03:18:50 localhost podman[77452]: 2025-11-23 08:18:50.408236864 +0000 UTC m=+0.300070746 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, container_name=collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Nov 23 03:18:50 localhost podman[77451]: 2025-11-23 08:18:50.421953838 +0000 UTC m=+0.320181349 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:18:50 localhost podman[77452]: 2025-11-23 08:18:50.42239946 +0000 UTC m=+0.314233322 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-collectd, release=1761123044, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=)
Nov 23 03:18:50 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:18:50 localhost podman[77458]: 2025-11-23 08:18:50.467496667 +0000 UTC m=+0.355569449 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.buildah.version=1.41.4, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Nov 23 03:18:50 localhost podman[77458]: 2025-11-23 08:18:50.48193777 +0000 UTC m=+0.370010602 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.buildah.version=1.41.4, 
name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, architecture=x86_64)
Nov 23 03:18:50 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:18:50 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:18:50 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:18:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:18:54 localhost systemd[1]: tmp-crun.4U7SI1.mount: Deactivated successfully.
Nov 23 03:18:54 localhost podman[77609]: 2025-11-23 08:18:54.196367892 +0000 UTC m=+0.097311774 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc.)
Nov 23 03:18:54 localhost podman[77609]: 2025-11-23 08:18:54.556039629 +0000 UTC m=+0.456983471 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
build-date=2025-11-19T00:36:58Z, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:18:54 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:18:56 localhost podman[77635]: 2025-11-23 08:18:56.190585295 +0000 UTC m=+0.091480169 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 23 03:18:56 localhost podman[77633]: 2025-11-23 08:18:56.22502993 +0000 UTC m=+0.132909750 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true)
Nov 23 03:18:56 localhost podman[77634]: 2025-11-23 08:18:56.298968722 +0000 UTC m=+0.202041534 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64)
Nov 23 03:18:56 localhost podman[77633]: 2025-11-23 08:18:56.310043746 +0000 UTC m=+0.217923526 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64)
Nov 23 03:18:56 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:18:56 localhost podman[77634]: 2025-11-23 08:18:56.325902177 +0000 UTC m=+0.228975039 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:18:56 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:18:56 localhost podman[77635]: 2025-11-23 08:18:56.42170862 +0000 UTC m=+0.322603484 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd)
Nov 23 03:18:56 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:18:57 localhost systemd[1]: tmp-crun.vPQELL.mount: Deactivated successfully.
Nov 23 03:18:58 localhost systemd[1]: libpod-917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c.scope: Deactivated successfully.
Nov 23 03:18:58 localhost podman[77709]: 2025-11-23 08:18:58.351176484 +0000 UTC m=+0.059786288 container died 917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, container_name=nova_wait_for_compute_service, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Nov 23 03:18:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c-userdata-shm.mount: Deactivated successfully.
Nov 23 03:18:58 localhost systemd[1]: var-lib-containers-storage-overlay-db5fdd560461b59dde387445faf8d301f89b69299c673ce8ac4c4cf7e6c32a08-merged.mount: Deactivated successfully.
Nov 23 03:18:58 localhost podman[77709]: 2025-11-23 08:18:58.384075567 +0000 UTC m=+0.092685331 container cleanup 917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, container_name=nova_wait_for_compute_service, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:36:58Z, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Nov 23 03:18:58 localhost systemd[1]: libpod-conmon-917d36a82a11a0a2f024de95b1a812e52ea1245fbd4d9d99cbf327ba8ef2c24c.scope: Deactivated successfully.
Nov 23 03:18:58 localhost python3[75629]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=54a97af4633bfad00758ecf55e783ce2 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Nov 23 03:18:59 localhost python3[77765]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:18:59 localhost python3[77781]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 23 03:18:59 localhost python3[77842]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1763885939.407348-118621-194873736599186/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:19:00 localhost python3[77858]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 03:19:00 localhost systemd[1]: Reloading.
Nov 23 03:19:00 localhost systemd-sysv-generator[77888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:19:00 localhost systemd-rc-local-generator[77880]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:19:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:19:01 localhost sshd[77911]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:19:01 localhost python3[77910]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:19:02 localhost systemd[1]: Reloading.
Nov 23 03:19:02 localhost systemd-sysv-generator[77943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:19:02 localhost systemd-rc-local-generator[77938]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:19:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:19:02 localhost systemd[1]: Starting nova_compute container...
Nov 23 03:19:03 localhost tripleo-start-podman-container[77952]: Creating additional drop-in dependency for "nova_compute" (6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199)
Nov 23 03:19:03 localhost systemd[1]: Reloading.
Nov 23 03:19:03 localhost systemd-rc-local-generator[78006]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:19:03 localhost systemd-sysv-generator[78011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:19:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:19:03 localhost systemd[1]: Started nova_compute container.
Nov 23 03:19:03 localhost python3[78049]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:19:05 localhost python3[78170]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005532586 step=5 update_config_hash_only=False
Nov 23 03:19:06 localhost python3[78186]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 03:19:06 localhost python3[78202]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True
Nov 23 03:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:19:21 localhost podman[78283]: 2025-11-23 08:19:21.174651638 +0000 UTC m=+0.067963496 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container)
Nov 23 03:19:21 localhost podman[78282]: 2025-11-23 08:19:21.249824853 +0000 UTC m=+0.143933652 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step3, name=rhosp17/openstack-collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container)
Nov 23 03:19:21 localhost podman[78282]: 2025-11-23 08:19:21.25649628 +0000 UTC m=+0.150605029 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Nov 23 03:19:21 localhost podman[78284]: 2025-11-23 08:19:21.212909363 +0000 UTC m=+0.096954674 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:19:21 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:19:21 localhost podman[78284]: 2025-11-23 08:19:21.297868858 +0000 UTC m=+0.181914149 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true)
Nov 23 03:19:21 localhost podman[78283]: 2025-11-23 08:19:21.307039281 +0000 UTC m=+0.200351159 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:19:21 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:19:21 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:19:21 localhost podman[78281]: 2025-11-23 08:19:21.344627999 +0000 UTC m=+0.239065886 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git)
Nov 23 03:19:21 localhost podman[78280]: 2025-11-23 08:19:21.257619109 +0000 UTC m=+0.151555623 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, container_name=nova_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute)
Nov 23 03:19:21 localhost podman[78281]: 2025-11-23 08:19:21.401073328 +0000 UTC m=+0.295511235 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Nov 23 03:19:21 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:19:21 localhost podman[78279]: 2025-11-23 08:19:21.417112043 +0000 UTC m=+0.311603272 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, release=1761123044, vcs-type=git, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:19:21 localhost podman[78280]: 2025-11-23 08:19:21.437875194 +0000 UTC m=+0.331811698 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:36:58Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:19:21 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:19:21 localhost podman[78279]: 2025-11-23 08:19:21.453068877 +0000 UTC m=+0.347560106 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=logrotate_crond, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4)
Nov 23 03:19:21 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:19:22 localhost systemd[1]: tmp-crun.nyeeVE.mount: Deactivated successfully.
Nov 23 03:19:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:19:25 localhost podman[78417]: 2025-11-23 08:19:25.16567309 +0000 UTC m=+0.075271798 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Nov 23 03:19:25 localhost podman[78417]: 2025-11-23 08:19:25.565154234 +0000 UTC m=+0.474752972 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20251118.1)
Nov 23 03:19:25 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:19:27 localhost systemd[1]: tmp-crun.qydF98.mount: Deactivated successfully.
Nov 23 03:19:27 localhost podman[78442]: 2025-11-23 08:19:27.190230599 +0000 UTC m=+0.087091423 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:19:27 localhost podman[78441]: 2025-11-23 08:19:27.243562204 +0000 UTC m=+0.144568067 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 
ovn-controller, container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller)
Nov 23 03:19:27 localhost podman[78440]: 2025-11-23 08:19:27.29202226 +0000 UTC m=+0.194499613 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, release=1761123044, version=17.1.12, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4)
Nov 23 03:19:27 localhost podman[78441]: 2025-11-23 08:19:27.299109409 +0000 UTC m=+0.200115352 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, distribution-scope=public, container_name=ovn_controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:19:27 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:19:27 localhost podman[78440]: 2025-11-23 08:19:27.341878134 +0000 UTC m=+0.244355527 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4)
Nov 23 03:19:27 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:19:27 localhost podman[78442]: 2025-11-23 08:19:27.383371575 +0000 UTC m=+0.280232319 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Nov 23 03:19:27 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:19:28 localhost systemd[1]: tmp-crun.0jseVp.mount: Deactivated successfully.
Nov 23 03:19:36 localhost sshd[78515]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:19:36 localhost systemd-logind[761]: New session 33 of user zuul.
Nov 23 03:19:36 localhost systemd[1]: Started Session 33 of User zuul.
Nov 23 03:19:37 localhost python3[78624]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 03:19:44 localhost python3[78887]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None
Nov 23 03:19:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:19:48 localhost recover_tripleo_nova_virtqemud[79005]: 61733
Nov 23 03:19:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:19:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:19:48 localhost python3[79003]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None
Nov 23 03:19:48 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 23 03:19:48 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation.
Nov 23 03:19:48 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 03:19:48 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 03:19:48 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 03:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:19:52 localhost systemd[1]: tmp-crun.nIUzcu.mount: Deactivated successfully.
Nov 23 03:19:52 localhost podman[79053]: 2025-11-23 08:19:52.208477532 +0000 UTC m=+0.103396376 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Nov 23 03:19:52 localhost podman[79053]: 2025-11-23 08:19:52.260013549 +0000 UTC m=+0.154932393 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:19:52 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:19:52 localhost podman[79052]: 2025-11-23 08:19:52.262642369 +0000 UTC m=+0.157428589 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:19:52 localhost podman[79070]: 2025-11-23 08:19:52.318183153 +0000 UTC m=+0.200453202 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container)
Nov 23 03:19:52 localhost podman[79055]: 2025-11-23 08:19:52.367686408 +0000 UTC m=+0.251812015 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container)
Nov 23 03:19:52 localhost podman[79070]: 2025-11-23 08:19:52.375011722 +0000 UTC m=+0.257281781 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute)
Nov 23 03:19:52 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:19:52 localhost podman[79054]: 2025-11-23 08:19:52.425876572 +0000 UTC m=+0.317246582 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team)
Nov 23 03:19:52 localhost podman[79055]: 2025-11-23 08:19:52.451873162 +0000 UTC m=+0.335998789 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:19:52 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:19:52 localhost podman[79051]: 2025-11-23 08:19:52.475068618 +0000 UTC m=+0.371639876 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, 
architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4)
Nov 23 03:19:52 localhost podman[79051]: 2025-11-23 08:19:52.487295892 +0000 UTC m=+0.383867140 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, container_name=logrotate_crond, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Nov 23 03:19:52 localhost podman[79052]: 2025-11-23 08:19:52.496310992 +0000 UTC m=+0.391097272 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:19:52 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:19:52 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:19:52 localhost podman[79054]: 2025-11-23 08:19:52.53881604 +0000 UTC m=+0.430186060 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044)
Nov 23 03:19:52 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:19:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:19:56 localhost podman[79183]: 2025-11-23 08:19:56.183246585 +0000 UTC m=+0.089412614 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:19:56 localhost podman[79183]: 2025-11-23 08:19:56.552018313 +0000 UTC m=+0.458184332 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, container_name=nova_migration_target)
Nov 23 03:19:56 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:19:58 localhost systemd[1]: tmp-crun.8QhJD5.mount: Deactivated successfully.
Nov 23 03:19:58 localhost podman[79206]: 2025-11-23 08:19:58.180789766 +0000 UTC m=+0.093057542 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1)
Nov 23 03:19:58 localhost podman[79208]: 2025-11-23 08:19:58.196775759 +0000 UTC m=+0.097069367 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, version=17.1.12, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=metrics_qdr, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:19:58 localhost podman[79206]: 2025-11-23 08:19:58.226985601 +0000 UTC m=+0.139253387 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20251118.1)
Nov 23 03:19:58 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:19:58 localhost podman[79207]: 2025-11-23 08:19:58.284046357 +0000 UTC m=+0.189319057 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, 
architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Nov 23 03:19:58 localhost podman[79207]: 2025-11-23 08:19:58.33694632 +0000 UTC m=+0.242219020 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:19:58 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:19:58 localhost podman[79208]: 2025-11-23 08:19:58.412126815 +0000 UTC m=+0.312420413 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, 
container_name=metrics_qdr, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:19:58 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:20:21 localhost sshd[79360]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:20:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:20:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:20:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:20:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:20:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:20:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:20:23 localhost podman[79375]: 2025-11-23 08:20:23.290707407 +0000 UTC m=+0.182751908 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64)
Nov 23 03:20:23 localhost podman[79364]: 2025-11-23 08:20:23.208440535 +0000 UTC m=+0.104082421 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:20:23 localhost podman[79375]: 2025-11-23 08:20:23.325687055 +0000 UTC m=+0.217731626 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, version=17.1.12, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Nov 23 03:20:23 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:20:23 localhost podman[79364]: 2025-11-23 08:20:23.338216537 +0000 UTC m=+0.233858423 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:20:23 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:20:23 localhost podman[79379]: 2025-11-23 08:20:23.260899416 +0000 UTC m=+0.146636110 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true)
Nov 23 03:20:23 localhost podman[79363]: 2025-11-23 08:20:23.187103809 +0000 UTC m=+0.088538419 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public)
Nov 23 03:20:23 localhost podman[79365]: 2025-11-23 08:20:23.398271949 +0000 UTC m=+0.291939463 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, tcib_managed=true, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team)
Nov 23 03:20:23 localhost podman[79363]: 2025-11-23 08:20:23.419984025 +0000 UTC m=+0.321418655 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:20:23 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:20:23 localhost podman[79365]: 2025-11-23 08:20:23.433428392 +0000 UTC m=+0.327095896 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, batch=17.1_20251118.1)
Nov 23 03:20:23 localhost podman[79379]: 2025-11-23 08:20:23.444617508 +0000 UTC m=+0.330354222 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container)
Nov 23 03:20:23 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:20:23 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:20:23 localhost podman[79362]: 2025-11-23 08:20:23.24179919 +0000 UTC m=+0.143074325 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=logrotate_crond, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container)
Nov 23 03:20:23 localhost podman[79362]: 2025-11-23 08:20:23.523966802 +0000 UTC m=+0.425241927 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:20:23 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:20:24 localhost systemd[1]: tmp-crun.SAYN44.mount: Deactivated successfully.
Nov 23 03:20:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:20:27 localhost podman[79500]: 2025-11-23 08:20:27.177959429 +0000 UTC m=+0.084022840 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, architecture=x86_64, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:20:27 localhost podman[79500]: 2025-11-23 08:20:27.558049748 +0000 UTC m=+0.464113199 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, release=1761123044)
Nov 23 03:20:27 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:20:29 localhost systemd[1]: tmp-crun.mjD2E9.mount: Deactivated successfully.
Nov 23 03:20:29 localhost podman[79525]: 2025-11-23 08:20:29.194902448 +0000 UTC m=+0.094067706 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, vcs-type=git, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Nov 23 03:20:29 localhost systemd[1]: tmp-crun.iUQvVW.mount: Deactivated successfully.
Nov 23 03:20:29 localhost podman[79523]: 2025-11-23 08:20:29.266005353 +0000 UTC m=+0.169812135 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=)
Nov 23 03:20:29 localhost podman[79524]: 2025-11-23 08:20:29.302354887 +0000 UTC m=+0.203839817 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, architecture=x86_64)
Nov 23 03:20:29 localhost podman[79524]: 2025-11-23 08:20:29.327965426 +0000 UTC m=+0.229450316 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 23 03:20:29 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:20:29 localhost podman[79523]: 2025-11-23 08:20:29.354608083 +0000 UTC m=+0.258414865 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4)
Nov 23 03:20:29 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:20:29 localhost podman[79525]: 2025-11-23 08:20:29.441838886 +0000 UTC m=+0.341004144 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:20:29 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:20:48 localhost systemd[1]: session-33.scope: Deactivated successfully.
Nov 23 03:20:48 localhost systemd[1]: session-33.scope: Consumed 5.640s CPU time.
Nov 23 03:20:48 localhost systemd-logind[761]: Session 33 logged out. Waiting for processes to exit.
Nov 23 03:20:48 localhost systemd-logind[761]: Removed session 33.
Nov 23 03:20:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:20:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:20:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:20:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:20:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:20:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:20:54 localhost systemd[1]: tmp-crun.5WuKZD.mount: Deactivated successfully.
Nov 23 03:20:54 localhost podman[79643]: 2025-11-23 08:20:54.202520627 +0000 UTC m=+0.102873320 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Nov 23 03:20:54 localhost podman[79646]: 2025-11-23 08:20:54.246471052 +0000 UTC m=+0.136179612 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:20:54 localhost podman[79646]: 2025-11-23 08:20:54.259870217 +0000 UTC m=+0.149578807 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, 
architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team)
Nov 23 03:20:54 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:20:54 localhost podman[79658]: 2025-11-23 08:20:54.272556084 +0000 UTC m=+0.154811727 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute)
Nov 23 03:20:54 localhost podman[79652]: 2025-11-23 08:20:54.308792815 +0000 UTC m=+0.196524503 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public)
Nov 23 03:20:54 localhost podman[79652]: 2025-11-23 08:20:54.317824245 +0000 UTC m=+0.205555943 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, architecture=x86_64, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z)
Nov 23 03:20:54 localhost podman[79658]: 2025-11-23 08:20:54.326420892 +0000 UTC m=+0.208676555 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, 
architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:20:54 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:20:54 localhost podman[79643]: 2025-11-23 08:20:54.333726996 +0000 UTC m=+0.234079779 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64)
Nov 23 03:20:54 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:20:54 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:20:54 localhost podman[79644]: 2025-11-23 08:20:54.408510329 +0000 UTC m=+0.305056301 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, tcib_managed=true)
Nov 23 03:20:54 localhost podman[79645]: 2025-11-23 08:20:54.465057709 +0000 UTC m=+0.359495235 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z)
Nov 23 03:20:54 localhost podman[79644]: 2025-11-23 08:20:54.485352298 +0000 UTC m=+0.381898240 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible)
Nov 23 03:20:54 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:20:54 localhost podman[79645]: 2025-11-23 08:20:54.519970435 +0000 UTC m=+0.414407921 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:20:54 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:20:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:20:58 localhost systemd[1]: tmp-crun.bG5hmJ.mount: Deactivated successfully.
Nov 23 03:20:58 localhost podman[79777]: 2025-11-23 08:20:58.2113079 +0000 UTC m=+0.118435913 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:20:58 localhost podman[79777]: 2025-11-23 08:20:58.605107863 +0000 UTC m=+0.512235896 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=)
Nov 23 03:20:58 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:21:00 localhost systemd[1]: tmp-crun.MHKNaG.mount: Deactivated successfully.
Nov 23 03:21:00 localhost podman[79801]: 2025-11-23 08:21:00.185247049 +0000 UTC m=+0.089816543 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc.)
Nov 23 03:21:00 localhost podman[79802]: 2025-11-23 08:21:00.231765083 +0000 UTC m=+0.133197854 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Nov 23 03:21:00 localhost podman[79802]: 2025-11-23 08:21:00.286369181 +0000 UTC m=+0.187802282 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-ovn-controller, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:21:00 localhost podman[79803]: 2025-11-23 08:21:00.286042802 +0000 UTC m=+0.185366817 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, version=17.1.12, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1)
Nov 23 03:21:00 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:21:00 localhost podman[79801]: 2025-11-23 08:21:00.312184866 +0000 UTC m=+0.216754320 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, config_id=tripleo_step4)
Nov 23 03:21:00 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:21:00 localhost podman[79803]: 2025-11-23 08:21:00.477591153 +0000 UTC m=+0.376915158 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Nov 23 03:21:00 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:21:01 localhost sshd[79880]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:21:01 localhost systemd-logind[761]: New session 34 of user zuul.
Nov 23 03:21:01 localhost systemd[1]: Started Session 34 of User zuul.
Nov 23 03:21:01 localhost python3[79899]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 03:21:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:21:24 localhost recover_tripleo_nova_virtqemud[79979]: 61733
Nov 23 03:21:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:21:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:21:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:21:25 localhost podman[79981]: 2025-11-23 08:21:25.200270553 +0000 UTC m=+0.103475826 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true)
Nov 23 03:21:25 localhost podman[79983]: 2025-11-23 08:21:25.263610332 +0000 UTC m=+0.159721187 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, container_name=collectd, config_id=tripleo_step3, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container)
Nov 23 03:21:25 localhost podman[79982]: 2025-11-23 08:21:25.304925568 +0000 UTC m=+0.205697126 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, version=17.1.12, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi)
Nov 23 03:21:25 localhost podman[79983]: 2025-11-23 08:21:25.328292617 +0000 UTC m=+0.224403472 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:21:25 localhost podman[79982]: 2025-11-23 08:21:25.333258609 +0000 UTC m=+0.234030187 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z)
Nov 23 03:21:25 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:21:25 localhost podman[80000]: 2025-11-23 08:21:25.357606105 +0000 UTC m=+0.247605637 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, release=1761123044, architecture=x86_64, tcib_managed=true, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Nov 23 03:21:25 localhost podman[79980]: 2025-11-23 08:21:25.409934503 +0000 UTC m=+0.312955951 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, architecture=x86_64)
Nov 23 03:21:25 localhost podman[79980]: 2025-11-23 08:21:25.420306748 +0000 UTC m=+0.323328176 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, tcib_managed=true, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, release=1761123044, distribution-scope=public)
Nov 23 03:21:25 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:21:25 localhost podman[79984]: 2025-11-23 08:21:25.479701413 +0000 UTC m=+0.371989616 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.12, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, 
tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:21:25 localhost podman[80000]: 2025-11-23 08:21:25.488134966 +0000 UTC m=+0.378134568 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:11:48Z, release=1761123044, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team)
Nov 23 03:21:25 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:21:25 localhost podman[79984]: 2025-11-23 08:21:25.518992585 +0000 UTC m=+0.411280808 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T23:44:13Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:21:25 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:21:25 localhost podman[79981]: 2025-11-23 08:21:25.540205548 +0000 UTC m=+0.443410791 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step5, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z)
Nov 23 03:21:25 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:21:25 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:21:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:21:29 localhost podman[80119]: 2025-11-23 08:21:29.168174212 +0000 UTC m=+0.075672508 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, release=1761123044, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, container_name=nova_migration_target, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Nov 23 03:21:29 localhost podman[80119]: 2025-11-23 08:21:29.562350576 +0000 UTC m=+0.469848792 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:21:29 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:21:31 localhost podman[80144]: 2025-11-23 08:21:31.181953949 +0000 UTC m=+0.088306934 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, 
maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller)
Nov 23 03:21:31 localhost systemd[1]: tmp-crun.OfceQg.mount: Deactivated successfully.
Nov 23 03:21:31 localhost podman[80143]: 2025-11-23 08:21:31.246689365 +0000 UTC m=+0.153376948 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:21:31 localhost podman[80145]: 2025-11-23 08:21:31.209437978 +0000 UTC m=+0.110510542 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible)
Nov 23 03:21:31 localhost podman[80144]: 2025-11-23 08:21:31.264464117 +0000 UTC m=+0.170817122 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:21:31 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:21:31 localhost podman[80143]: 2025-11-23 08:21:31.317978157 +0000 UTC m=+0.224665680 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:21:31 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:21:31 localhost podman[80145]: 2025-11-23 08:21:31.401959394 +0000 UTC m=+0.303031948 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Nov 23 03:21:31 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:21:31 localhost python3[80235]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 23 03:21:35 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 03:21:35 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 03:21:35 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 03:21:36 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 03:21:36 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 03:21:36 localhost systemd[1]: run-r901a7ea630db49f49681cc24f1ae6d76.service: Deactivated successfully.
Nov 23 03:21:36 localhost systemd[1]: run-r2d0714d9853a4985852763d0455b2eba.service: Deactivated successfully.
Nov 23 03:21:43 localhost sshd[80387]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:21:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:21:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:21:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:21:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:21:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:21:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:21:56 localhost systemd[1]: tmp-crun.VAKSIn.mount: Deactivated successfully.
Nov 23 03:21:56 localhost podman[80436]: 2025-11-23 08:21:56.205234701 +0000 UTC m=+0.097819784 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:21:56 localhost systemd[1]: tmp-crun.ahPOt2.mount: Deactivated successfully.
Nov 23 03:21:56 localhost podman[80434]: 2025-11-23 08:21:56.24970142 +0000 UTC m=+0.142145210 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, release=1761123044, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=logrotate_crond, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:21:56 localhost podman[80434]: 2025-11-23 08:21:56.255966417 +0000 UTC m=+0.148410197 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, container_name=logrotate_crond, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Nov 23 03:21:56 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:21:56 localhost podman[80435]: 2025-11-23 08:21:56.229868704 +0000 UTC m=+0.121949944 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, config_id=tripleo_step5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:21:56 localhost podman[80438]: 2025-11-23 08:21:56.293500932 +0000 UTC m=+0.175796413 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12)
Nov 23 03:21:56 localhost podman[80438]: 2025-11-23 08:21:56.327040782 +0000 UTC m=+0.209336283 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, release=1761123044, name=rhosp17/openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible)
Nov 23 03:21:56 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:21:56 localhost podman[80435]: 2025-11-23 08:21:56.35977959 +0000 UTC m=+0.251860840 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute)
Nov 23 03:21:56 localhost podman[80455]: 2025-11-23 08:21:56.370149704 +0000 UTC m=+0.243512039 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:21:56 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:21:56 localhost podman[80436]: 2025-11-23 08:21:56.387029992 +0000 UTC m=+0.279615085 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Nov 23 03:21:56 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:21:56 localhost podman[80437]: 2025-11-23 08:21:56.431160313 +0000 UTC m=+0.317020589 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, 
container_name=collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc.)
Nov 23 03:21:56 localhost podman[80437]: 2025-11-23 08:21:56.443966303 +0000 UTC m=+0.329826639 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 
collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public)
Nov 23 03:21:56 localhost podman[80455]: 2025-11-23 08:21:56.450184828 +0000 UTC m=+0.323547203 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, url=https://www.redhat.com)
Nov 23 03:21:56 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:21:56 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:22:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:22:00 localhost systemd[1]: tmp-crun.QFFSvU.mount: Deactivated successfully.
Nov 23 03:22:00 localhost podman[80566]: 2025-11-23 08:22:00.177438715 +0000 UTC m=+0.085376645 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=nova_migration_target)
Nov 23 03:22:00 localhost podman[80566]: 2025-11-23 08:22:00.549123573 +0000 UTC m=+0.457061463 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:22:00 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:22:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:22:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 5172 writes, 23K keys, 5172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5172 writes, 552 syncs, 9.37 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:22:02 localhost systemd[1]: tmp-crun.9ossUF.mount: Deactivated successfully.
Nov 23 03:22:02 localhost podman[80592]: 2025-11-23 08:22:02.193329917 +0000 UTC m=+0.095570455 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, version=17.1.12)
Nov 23 03:22:02 localhost podman[80591]: 2025-11-23 08:22:02.250618596 +0000 UTC m=+0.152553616 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Nov 23 03:22:02 localhost podman[80590]: 2025-11-23 08:22:02.294428858 +0000 UTC m=+0.198387902 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:22:02 localhost podman[80591]: 2025-11-23 08:22:02.302975945 +0000 UTC m=+0.204910965 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:22:02 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:22:02 localhost podman[80590]: 2025-11-23 08:22:02.379181365 +0000 UTC m=+0.283140419 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:22:02 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:22:02 localhost podman[80592]: 2025-11-23 08:22:02.418941121 +0000 UTC m=+0.321181679 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, container_name=metrics_qdr, config_id=tripleo_step1, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:22:02 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:22:03 localhost systemd[1]: tmp-crun.HO45fE.mount: Deactivated successfully.
Nov 23 03:22:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:22:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4448 writes, 20K keys, 4448 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4448 writes, 502 syncs, 8.86 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:22:09 localhost sshd[80661]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:22:18 localhost python3[80741]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 03:22:21 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 03:22:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:22:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:22:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:22:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:22:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:22:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:22:27 localhost podman[80946]: 2025-11-23 08:22:27.209945452 +0000 UTC m=+0.108129008 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.12, distribution-scope=public, io.openshift.expose-services=)
Nov 23 03:22:27 localhost systemd[1]: tmp-crun.02zOcZ.mount: Deactivated successfully.
Nov 23 03:22:27 localhost podman[80945]: 2025-11-23 08:22:27.26305474 +0000 UTC m=+0.160721843 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible)
Nov 23 03:22:27 localhost podman[80945]: 2025-11-23 08:22:27.276881797 +0000 UTC m=+0.174548900 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-cron, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., tcib_managed=true, container_name=logrotate_crond, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:22:27 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:22:27 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:22:27 localhost recover_tripleo_nova_virtqemud[81036]: 61733
Nov 23 03:22:27 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:22:27 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:22:27 localhost podman[80960]: 2025-11-23 08:22:27.32150984 +0000 UTC m=+0.208048098 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:22:27 localhost podman[80960]: 2025-11-23 08:22:27.340246038 +0000 UTC m=+0.226784346 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Nov 23 03:22:27 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:22:27 localhost podman[80947]: 2025-11-23 08:22:27.355280016 +0000 UTC m=+0.249376105 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:22:27 localhost podman[80967]: 2025-11-23 08:22:27.425669473 +0000 UTC m=+0.306478119 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute)
Nov 23 03:22:27 localhost podman[80967]: 2025-11-23 08:22:27.4572348 +0000 UTC m=+0.338043446 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible)
Nov 23 03:22:27 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:22:27 localhost podman[80948]: 2025-11-23 08:22:27.472876995 +0000 UTC m=+0.361761325 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, batch=17.1_20251118.1)
Nov 23 03:22:27 localhost podman[80948]: 2025-11-23 08:22:27.487876763 +0000 UTC m=+0.376761073 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public)
Nov 23 03:22:27 localhost podman[80946]: 2025-11-23 08:22:27.492327031 +0000 UTC m=+0.390510577 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, distribution-scope=public, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Nov 23 03:22:27 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:22:27 localhost podman[80947]: 2025-11-23 08:22:27.541693751 +0000 UTC m=+0.435789830 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi)
Nov 23 03:22:27 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:22:27 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:22:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:22:31 localhost podman[81081]: 2025-11-23 08:22:31.183191004 +0000 UTC m=+0.085767755 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, vcs-type=git, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 
17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public)
Nov 23 03:22:31 localhost podman[81081]: 2025-11-23 08:22:31.547999349 +0000 UTC m=+0.450576070 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Nov 23 03:22:31 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:22:33 localhost systemd[1]: tmp-crun.q1jQ2b.mount: Deactivated successfully.
Nov 23 03:22:33 localhost podman[81104]: 2025-11-23 08:22:33.199051666 +0000 UTC m=+0.102022437 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:22:33 localhost podman[81106]: 2025-11-23 08:22:33.244052829 +0000 UTC m=+0.139632324 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, architecture=x86_64, release=1761123044, 
io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:22:33 localhost podman[81104]: 2025-11-23 08:22:33.249556085 +0000 UTC m=+0.152526856 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Nov 23 03:22:33 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:22:33 localhost podman[81105]: 2025-11-23 08:22:33.299657454 +0000 UTC m=+0.198086055 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, container_name=ovn_controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044)
Nov 23 03:22:33 localhost podman[81105]: 2025-11-23 08:22:33.327926793 +0000 UTC m=+0.226355404 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true)
Nov 23 03:22:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:22:33 localhost podman[81106]: 2025-11-23 08:22:33.463011346 +0000 UTC m=+0.358590841 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, version=17.1.12, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1)
Nov 23 03:22:33 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:22:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:22:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:22:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:22:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:22:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:22:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:22:58 localhost podman[81228]: 2025-11-23 08:22:58.207646806 +0000 UTC m=+0.110563833 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team)
Nov 23 03:22:58 localhost podman[81227]: 2025-11-23 08:22:58.244588925 +0000 UTC m=+0.149089324 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:22:58 localhost podman[81228]: 2025-11-23 08:22:58.268818028 +0000 UTC m=+0.171735055 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:22:58 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:22:58 localhost podman[81227]: 2025-11-23 08:22:58.295992099 +0000 UTC m=+0.200492438 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., version=17.1.12)
Nov 23 03:22:58 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:22:58 localhost systemd[1]: tmp-crun.WDZODA.mount: Deactivated successfully.
Nov 23 03:22:58 localhost podman[81229]: 2025-11-23 08:22:58.361707201 +0000 UTC m=+0.262687907 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com)
Nov 23 03:22:58 localhost podman[81229]: 2025-11-23 08:22:58.37104515 +0000 UTC m=+0.272025886 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:22:58 localhost podman[81225]: 2025-11-23 08:22:58.404773984 +0000 UTC m=+0.313578467 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, container_name=logrotate_crond, release=1761123044, tcib_managed=true)
Nov 23 03:22:58 localhost podman[81226]: 2025-11-23 08:22:58.453340802 +0000 UTC m=+0.361676223 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.openshift.expose-services=, release=1761123044, container_name=nova_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:22:58 localhost podman[81226]: 2025-11-23 08:22:58.478643243 +0000 UTC m=+0.386978664 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5)
Nov 23 03:22:58 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:22:58 localhost podman[81225]: 2025-11-23 08:22:58.493711792 +0000 UTC m=+0.402516285 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, container_name=logrotate_crond)
Nov 23 03:22:58 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:22:58 localhost podman[81230]: 2025-11-23 08:22:58.473360513 +0000 UTC m=+0.374010860 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_compute, tcib_managed=true, distribution-scope=public)
Nov 23 03:22:58 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:22:58 localhost podman[81230]: 2025-11-23 08:22:58.557081813 +0000 UTC m=+0.457732160 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Nov 23 03:22:58 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:23:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:23:02 localhost systemd[1]: tmp-crun.3RsHfN.mount: Deactivated successfully.
Nov 23 03:23:02 localhost podman[81359]: 2025-11-23 08:23:02.170720898 +0000 UTC m=+0.078971905 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, version=17.1.12)
Nov 23 03:23:02 localhost podman[81359]: 2025-11-23 08:23:02.53995084 +0000 UTC m=+0.448201877 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team)
Nov 23 03:23:02 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:23:03 localhost sshd[81383]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:23:04 localhost podman[81385]: 2025-11-23 08:23:04.186751214 +0000 UTC m=+0.086733611 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:23:04 localhost podman[81386]: 2025-11-23 08:23:04.253957017 +0000 UTC m=+0.152205459 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, container_name=ovn_controller, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., 
build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible)
Nov 23 03:23:04 localhost podman[81385]: 2025-11-23 08:23:04.279443422 +0000 UTC m=+0.179425819 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:23:04 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:23:04 localhost podman[81387]: 2025-11-23 08:23:04.296941986 +0000 UTC m=+0.189273871 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., tcib_managed=true)
Nov 23 03:23:04 localhost podman[81386]: 2025-11-23 08:23:04.302461793 +0000 UTC m=+0.200710225 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
com.redhat.component=openstack-ovn-controller-container, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:23:04 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:23:04 localhost podman[81387]: 2025-11-23 08:23:04.495947994 +0000 UTC m=+0.388279829 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Nov 23 03:23:04 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:23:13 localhost python3[81473]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 03:23:16 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 03:23:16 localhost rhsm-service[6616]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 23 03:23:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:23:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:23:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:23:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:23:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:23:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:23:29 localhost podman[81791]: 2025-11-23 08:23:29.194334501 +0000 UTC m=+0.088814916 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 
collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Nov 23 03:23:29 localhost podman[81814]: 2025-11-23 08:23:29.260053734 +0000 UTC m=+0.141546715 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Nov 23 03:23:29 localhost podman[81800]: 2025-11-23 08:23:29.313515402 +0000 UTC m=+0.199711117 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat 
OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 23 03:23:29 localhost podman[81800]: 2025-11-23 08:23:29.319986173 +0000 UTC m=+0.206181918 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:23:29 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:23:29 localhost podman[81814]: 2025-11-23 08:23:29.342058069 +0000 UTC m=+0.223551080 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:23:29 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:23:29 localhost podman[81789]: 2025-11-23 08:23:29.357895049 +0000 UTC m=+0.255429145 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:23:29 localhost podman[81790]: 2025-11-23 08:23:29.28135614 +0000 UTC m=+0.175527447 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:23:29 localhost podman[81791]: 2025-11-23 08:23:29.390608567 +0000 UTC m=+0.285089012 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z)
Nov 23 03:23:29 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:23:29 localhost podman[81789]: 2025-11-23 08:23:29.410952496 +0000 UTC m=+0.308486602 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step5, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Nov 23 03:23:29 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:23:29 localhost podman[81790]: 2025-11-23 08:23:29.462057161 +0000 UTC m=+0.356228508 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Nov 23 03:23:29 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:23:29 localhost podman[81788]: 2025-11-23 08:23:29.479196206 +0000 UTC m=+0.376844595 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO 
Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git)
Nov 23 03:23:29 localhost podman[81788]: 2025-11-23 08:23:29.51176438 +0000 UTC m=+0.409412769 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, vendor=Red Hat, Inc.)
Nov 23 03:23:29 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:23:30 localhost systemd[1]: tmp-crun.iDsQAS.mount: Deactivated successfully.
Nov 23 03:23:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:23:33 localhost podman[81926]: 2025-11-23 08:23:33.176384475 +0000 UTC m=+0.078188995 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public)
Nov 23 03:23:33 localhost podman[81926]: 2025-11-23 08:23:33.546285075 +0000 UTC m=+0.448089595 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:23:33 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:23:34 localhost python3[81961]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Nov 23 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:23:35 localhost podman[81964]: 2025-11-23 08:23:35.163291258 +0000 UTC m=+0.068272561 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:23:35 localhost podman[81963]: 2025-11-23 08:23:35.225498927 +0000 UTC m=+0.131331124 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 
ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Nov 23 03:23:35 localhost podman[81962]: 2025-11-23 08:23:35.278881613 +0000 UTC m=+0.185245044 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64)
Nov 23 03:23:35 localhost podman[81963]: 2025-11-23 08:23:35.303441045 +0000 UTC m=+0.209273242 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1761123044, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Nov 23 03:23:35 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:23:35 localhost podman[81964]: 2025-11-23 08:23:35.328879439 +0000 UTC m=+0.233860742 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:23:35 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:23:35 localhost podman[81962]: 2025-11-23 08:23:35.380005016 +0000 UTC m=+0.286368447 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:23:35 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:24:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:24:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:24:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:24:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:24:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:24:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:24:00 localhost podman[82085]: 2025-11-23 08:24:00.244412831 +0000 UTC m=+0.144486812 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:24:00 localhost podman[82085]: 2025-11-23 08:24:00.274469278 +0000 UTC m=+0.174543239 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, version=17.1.12, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5)
Nov 23 03:24:00 localhost podman[82094]: 2025-11-23 08:24:00.2108194 +0000 UTC m=+0.100001142 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:24:00 localhost podman[82089]: 2025-11-23 08:24:00.311700036 +0000 UTC m=+0.207743621 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Nov 23 03:24:00 localhost podman[82092]: 2025-11-23 08:24:00.368318708 +0000 UTC m=+0.264174248 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Nov 23 03:24:00 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:24:00 localhost podman[82084]: 2025-11-23 08:24:00.290032282 +0000 UTC m=+0.200886439 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible)
Nov 23 03:24:00 localhost podman[82094]: 2025-11-23 08:24:00.391874942 +0000 UTC m=+0.281056704 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, 
name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:24:00 localhost podman[82089]: 2025-11-23 08:24:00.392341294 +0000 UTC m=+0.288384889 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:24:00 localhost podman[82092]: 2025-11-23 08:24:00.404302201 +0000 UTC m=+0.300157741 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public)
Nov 23 03:24:00 localhost podman[82093]: 2025-11-23 08:24:00.413478205 +0000 UTC m=+0.306921031 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1)
Nov 23 03:24:00 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:24:00 localhost podman[82084]: 2025-11-23 08:24:00.425019331 +0000 UTC m=+0.335873498 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:24:00 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:24:00 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:24:00 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:24:00 localhost podman[82093]: 2025-11-23 08:24:00.548596369 +0000 UTC m=+0.442039195 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, version=17.1.12, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true)
Nov 23 03:24:00 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:24:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:24:04 localhost podman[82219]: 2025-11-23 08:24:04.173085031 +0000 UTC m=+0.078165524 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, container_name=nova_migration_target, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Nov 23 03:24:04 localhost podman[82219]: 2025-11-23 08:24:04.543233267 +0000 UTC m=+0.448313870 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:24:04 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:24:06 localhost systemd[1]: tmp-crun.pH7oW7.mount: Deactivated successfully.
Nov 23 03:24:06 localhost podman[82242]: 2025-11-23 08:24:06.186006664 +0000 UTC m=+0.095670848 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, architecture=x86_64, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 23 03:24:06 localhost podman[82242]: 2025-11-23 08:24:06.227752931 +0000 UTC m=+0.137417145 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent)
Nov 23 03:24:06 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:24:06 localhost podman[82244]: 2025-11-23 08:24:06.25111414 +0000 UTC m=+0.151807647 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Nov 23 03:24:06 localhost podman[82243]: 2025-11-23 08:24:06.340199693 +0000 UTC m=+0.241967438 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, 
version=17.1.12, architecture=x86_64, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc.)
Nov 23 03:24:06 localhost podman[82243]: 2025-11-23 08:24:06.362237097 +0000 UTC m=+0.264004842 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:24:06 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:24:06 localhost podman[82244]: 2025-11-23 08:24:06.450864818 +0000 UTC m=+0.351558255 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc.)
Nov 23 03:24:06 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:24:29 localhost sshd[82394]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:24:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:24:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:24:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:24:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:24:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:24:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:24:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:24:31 localhost recover_tripleo_nova_virtqemud[82434]: 61733
Nov 23 03:24:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:24:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:24:31 localhost podman[82399]: 2025-11-23 08:24:31.217549587 +0000 UTC m=+0.102592042 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, config_id=tripleo_step3, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 
17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.buildah.version=1.41.4)
Nov 23 03:24:31 localhost podman[82399]: 2025-11-23 08:24:31.235921204 +0000 UTC m=+0.120963679 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, container_name=collectd, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:24:31 localhost podman[82398]: 2025-11-23 08:24:31.268626402 +0000 UTC m=+0.155267890 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com)
Nov 23 03:24:31 localhost podman[82400]: 2025-11-23 08:24:31.318625268 +0000 UTC m=+0.198358142 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4)
Nov 23 03:24:31 localhost podman[82398]: 2025-11-23 08:24:31.321903785 +0000 UTC m=+0.208545203 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:24:31 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:24:31 localhost podman[82400]: 2025-11-23 08:24:31.333149673 +0000 UTC m=+0.212882607 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:24:31 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:24:31 localhost podman[82414]: 2025-11-23 08:24:31.379751009 +0000 UTC m=+0.252810316 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:11:48Z, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:24:31 localhost podman[82414]: 2025-11-23 08:24:31.410010321 +0000 UTC m=+0.283069608 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z)
Nov 23 03:24:31 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:24:31 localhost podman[82397]: 2025-11-23 08:24:31.42807246 +0000 UTC m=+0.317601404 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:24:31 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:24:31 localhost podman[82397]: 2025-11-23 08:24:31.483010897 +0000 UTC m=+0.372539871 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:24:31 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:24:31 localhost podman[82396]: 2025-11-23 08:24:31.526777018 +0000 UTC m=+0.412123071 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Nov 23 03:24:31 localhost podman[82396]: 2025-11-23 08:24:31.562864865 +0000 UTC m=+0.448210948 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron)
Nov 23 03:24:31 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:24:34 localhost systemd[1]: session-34.scope: Deactivated successfully.
Nov 23 03:24:34 localhost systemd[1]: session-34.scope: Consumed 19.670s CPU time.
Nov 23 03:24:34 localhost systemd-logind[761]: Session 34 logged out. Waiting for processes to exit.
Nov 23 03:24:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:24:34 localhost systemd-logind[761]: Removed session 34.
Nov 23 03:24:34 localhost podman[82537]: 2025-11-23 08:24:34.963295775 +0000 UTC m=+0.111457197 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, 
tcib_managed=true, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=)
Nov 23 03:24:35 localhost podman[82537]: 2025-11-23 08:24:35.330552815 +0000 UTC m=+0.478714187 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.)
Nov 23 03:24:35 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:24:37 localhost systemd[1]: tmp-crun.u4kWW6.mount: Deactivated successfully.
Nov 23 03:24:37 localhost podman[82561]: 2025-11-23 08:24:37.19312175 +0000 UTC m=+0.096995014 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, container_name=ovn_controller, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:24:37 localhost systemd[1]: tmp-crun.G0gB0x.mount: Deactivated successfully.
Nov 23 03:24:37 localhost podman[82561]: 2025-11-23 08:24:37.23950751 +0000 UTC m=+0.143380724 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, release=1761123044, container_name=ovn_controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 23 03:24:37 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:24:37 localhost podman[82560]: 2025-11-23 08:24:37.291116909 +0000 UTC m=+0.197914900 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:24:37 localhost podman[82562]: 2025-11-23 08:24:37.244311727 +0000 UTC m=+0.142011217 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team)
Nov 23 03:24:37 localhost podman[82560]: 2025-11-23 08:24:37.356073161 +0000 UTC m=+0.262871232 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, container_name=ovn_metadata_agent)
Nov 23 03:24:37 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:24:37 localhost podman[82562]: 2025-11-23 08:24:37.436579117 +0000 UTC m=+0.334278647 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Nov 23 03:24:37 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:25:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:25:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:25:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:25:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:25:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:25:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:25:02 localhost podman[82685]: 2025-11-23 08:25:02.203802625 +0000 UTC m=+0.098163197 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044)
Nov 23 03:25:02 localhost podman[82684]: 2025-11-23 08:25:02.244993839 +0000 UTC m=+0.140973324 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true)
Nov 23 03:25:02 localhost podman[82684]: 2025-11-23 08:25:02.325293105 +0000 UTC m=+0.221272550 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:25:02 localhost podman[82686]: 2025-11-23 08:25:02.288186818 +0000 UTC m=+0.181212485 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:25:02 localhost podman[82685]: 2025-11-23 08:25:02.343260878 +0000 UTC m=+0.237621450 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:25:02 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:25:02 localhost podman[82697]: 2025-11-23 08:25:02.354482585 +0000 UTC m=+0.236631556 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:25:02 localhost podman[82687]: 2025-11-23 08:25:02.315619881 +0000 UTC m=+0.202914888 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:25:02 localhost podman[82686]: 2025-11-23 08:25:02.372080508 +0000 UTC m=+0.265106155 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, version=17.1.12, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64)
Nov 23 03:25:02 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:25:02 localhost podman[82683]: 2025-11-23 08:25:02.332366992 +0000 UTC m=+0.228963743 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Nov 23 03:25:02 localhost podman[82697]: 2025-11-23 08:25:02.384128475 +0000 UTC m=+0.266277436 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4)
Nov 23 03:25:02 localhost podman[82687]: 2025-11-23 08:25:02.396296295 +0000 UTC m=+0.283591242 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc.)
Nov 23 03:25:02 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:25:02 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:25:02 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:25:02 localhost podman[82683]: 2025-11-23 08:25:02.462728156 +0000 UTC m=+0.359324887 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Nov 23 03:25:02 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:25:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:25:06 localhost podman[82814]: 2025-11-23 08:25:06.207428204 +0000 UTC m=+0.085931196 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, 
release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true)
Nov 23 03:25:06 localhost podman[82814]: 2025-11-23 08:25:06.581981193 +0000 UTC m=+0.460484165 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, 
url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:25:06 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:25:08 localhost podman[82838]: 2025-11-23 08:25:08.239312911 +0000 UTC m=+0.141558830 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, container_name=ovn_controller, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:25:08 localhost podman[82839]: 2025-11-23 08:25:08.210180964 +0000 UTC m=+0.105976574 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, build-date=2025-11-18T22:49:46Z, 
managed_by=tripleo_ansible, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Nov 23 03:25:08 localhost podman[82838]: 2025-11-23 08:25:08.269827605 +0000 UTC m=+0.172074004 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:25:08 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:25:08 localhost podman[82837]: 2025-11-23 08:25:08.310475527 +0000 UTC m=+0.214751060 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=ovn_metadata_agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Nov 23 03:25:08 localhost podman[82837]: 2025-11-23 08:25:08.361016308 +0000 UTC m=+0.265291871 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, release=1761123044, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:25:08 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:25:08 localhost podman[82839]: 2025-11-23 08:25:08.405421917 +0000 UTC m=+0.301217547 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64)
Nov 23 03:25:08 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:25:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:25:22 localhost recover_tripleo_nova_virtqemud[82926]: 61733
Nov 23 03:25:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:25:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:25:23 localhost podman[83012]: 2025-11-23 08:25:23.619523738 +0000 UTC m=+0.104273018 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, release=553, vcs-type=git, version=7, distribution-scope=public, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 03:25:23 localhost podman[83012]: 2025-11-23 08:25:23.729035385 +0000 UTC m=+0.213784705 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=553, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container)
Nov 23 03:25:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:25:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:25:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:25:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:25:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:25:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:25:33 localhost systemd[1]: tmp-crun.l8XM8U.mount: Deactivated successfully.
Nov 23 03:25:33 localhost podman[83163]: 2025-11-23 08:25:33.246961868 +0000 UTC m=+0.145940735 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Nov 23 03:25:33 localhost podman[83163]: 2025-11-23 08:25:33.257095405 +0000 UTC m=+0.156074332 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:25:33 localhost podman[83162]: 2025-11-23 08:25:33.213219409 +0000 UTC m=+0.113599194 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:25:33 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:25:33 localhost podman[83162]: 2025-11-23 08:25:33.292710963 +0000 UTC m=+0.193090728 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, container_name=collectd)
Nov 23 03:25:33 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:25:33 localhost podman[83161]: 2025-11-23 08:25:33.348844672 +0000 UTC m=+0.251372633 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:25:33 localhost podman[83164]: 2025-11-23 08:25:33.387877381 +0000 UTC m=+0.284951978 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, architecture=x86_64, container_name=ceilometer_agent_compute)
Nov 23 03:25:33 localhost podman[83161]: 2025-11-23 08:25:33.40718275 +0000 UTC m=+0.309710651 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, 
maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1)
Nov 23 03:25:33 localhost podman[83159]: 2025-11-23 08:25:33.435976809 +0000 UTC m=+0.340718778 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Nov 23 03:25:33 localhost podman[83160]: 2025-11-23 08:25:33.443488996 +0000 UTC m=+0.347689511 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:25:33 localhost podman[83164]: 2025-11-23 08:25:33.469678487 +0000 UTC m=+0.366753184 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:25:33 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:25:33 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:25:33 localhost podman[83160]: 2025-11-23 08:25:33.504914355 +0000 UTC m=+0.409114870 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:25:33 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:25:33 localhost podman[83159]: 2025-11-23 08:25:33.521663446 +0000 UTC m=+0.426405375 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044)
Nov 23 03:25:33 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:25:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:25:37 localhost podman[83298]: 2025-11-23 08:25:37.177853602 +0000 UTC m=+0.082762491 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:25:37 localhost podman[83298]: 2025-11-23 08:25:37.534631532 +0000 UTC m=+0.439540451 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:25:37 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:25:39 localhost systemd[1]: tmp-crun.29a5U2.mount: Deactivated successfully.
Nov 23 03:25:39 localhost podman[83322]: 2025-11-23 08:25:39.174639773 +0000 UTC m=+0.082804422 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible)
Nov 23 03:25:39 localhost systemd[1]: tmp-crun.XXGMSA.mount: Deactivated successfully.
Nov 23 03:25:39 localhost podman[83323]: 2025-11-23 08:25:39.241019423 +0000 UTC m=+0.144143580 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, config_id=tripleo_step4)
Nov 23 03:25:39 localhost podman[83322]: 2025-11-23 08:25:39.246281901 +0000 UTC m=+0.154446580 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12)
Nov 23 03:25:39 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:25:39 localhost podman[83324]: 2025-11-23 08:25:39.195570325 +0000 UTC m=+0.096631077 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:25:39 localhost podman[83323]: 2025-11-23 08:25:39.383860926 +0000 UTC m=+0.286985043 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
distribution-scope=public, vcs-type=git, io.openshift.expose-services=, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:25:39 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:25:39 localhost podman[83324]: 2025-11-23 08:25:39.405802615 +0000 UTC m=+0.306863337 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:25:39 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:25:40 localhost systemd[1]: tmp-crun.TNyYvF.mount: Deactivated successfully.
Nov 23 03:25:46 localhost sshd[83400]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:26:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:26:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:26:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:26:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:26:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:26:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:26:04 localhost podman[83447]: 2025-11-23 08:26:04.197169942 +0000 UTC m=+0.100606131 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-cron)
Nov 23 03:26:04 localhost podman[83447]: 2025-11-23 08:26:04.20996711 +0000 UTC m=+0.113403369 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-type=git, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond)
Nov 23 03:26:04 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:26:04 localhost systemd[1]: tmp-crun.8KKVsM.mount: Deactivated successfully.
Nov 23 03:26:04 localhost podman[83462]: 2025-11-23 08:26:04.259393931 +0000 UTC m=+0.146453028 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, 
name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:26:04 localhost podman[83448]: 2025-11-23 08:26:04.306943564 +0000 UTC m=+0.206888080 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, container_name=nova_compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:26:04 localhost podman[83448]: 2025-11-23 08:26:04.337837328 +0000 UTC m=+0.237781774 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1761123044, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=)
Nov 23 03:26:04 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:26:04 localhost podman[83456]: 2025-11-23 08:26:04.356362037 +0000 UTC m=+0.249058063 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:26:04 localhost podman[83462]: 2025-11-23 08:26:04.35915117 +0000 UTC m=+0.246210277 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ceilometer_agent_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1)
Nov 23 03:26:04 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:26:04 localhost podman[83456]: 2025-11-23 08:26:04.399952156 +0000 UTC m=+0.292648252 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Nov 23 03:26:04 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:26:04 localhost podman[83450]: 2025-11-23 08:26:04.410902404 +0000 UTC m=+0.304671968 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Nov 23 03:26:04 localhost podman[83450]: 2025-11-23 08:26:04.423872296 +0000 UTC m=+0.317641860 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64)
Nov 23 03:26:04 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:26:04 localhost podman[83449]: 2025-11-23 08:26:04.511223467 +0000 UTC m=+0.404545819 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:26:04 localhost podman[83449]: 2025-11-23 08:26:04.56786079 +0000 UTC m=+0.461183102 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:26:04 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:26:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:26:08 localhost systemd[1]: tmp-crun.v6mu8J.mount: Deactivated successfully.
Nov 23 03:26:08 localhost podman[83587]: 2025-11-23 08:26:08.170963876 +0000 UTC m=+0.079030673 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:26:08 localhost podman[83587]: 2025-11-23 08:26:08.544643032 +0000 UTC m=+0.452709839 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:26:08 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:26:10 localhost systemd[1]: tmp-crun.bqOFfr.mount: Deactivated successfully.
Nov 23 03:26:10 localhost podman[83611]: 2025-11-23 08:26:10.253990461 +0000 UTC m=+0.157166381 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true)
Nov 23 03:26:10 localhost podman[83612]: 2025-11-23 08:26:10.297007885 +0000 UTC m=+0.197642638 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, 
release=1761123044, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:26:10 localhost podman[83611]: 2025-11-23 08:26:10.306870675 +0000 UTC m=+0.210046595 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Nov 23 03:26:10 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:26:10 localhost podman[83610]: 2025-11-23 08:26:10.209683015 +0000 UTC m=+0.116555902 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z)
Nov 23 03:26:10 localhost podman[83610]: 2025-11-23 08:26:10.392908722 +0000 UTC m=+0.299781629 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, 
batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=)
Nov 23 03:26:10 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:26:10 localhost podman[83612]: 2025-11-23 08:26:10.490875113 +0000 UTC m=+0.391509916 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:26:10 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:26:11 localhost systemd[1]: tmp-crun.u0jmq0.mount: Deactivated successfully.
Nov 23 03:26:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:26:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:26:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:26:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:26:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:26:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:26:35 localhost podman[83783]: 2025-11-23 08:26:35.209345558 +0000 UTC m=+0.090524686 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack 
TripleO Team, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Nov 23 03:26:35 localhost podman[83763]: 2025-11-23 08:26:35.25610942 +0000 UTC m=+0.156717630 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:26:35 localhost podman[83763]: 2025-11-23 08:26:35.267863199 +0000 UTC m=+0.168471409 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat 
OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=logrotate_crond)
Nov 23 03:26:35 localhost podman[83783]: 2025-11-23 08:26:35.268317002 +0000 UTC m=+0.149496190 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Nov 23 03:26:35 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:26:35 localhost podman[83766]: 2025-11-23 08:26:35.320835345 +0000 UTC m=+0.211154214 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Nov 23 03:26:35 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:26:35 localhost podman[83766]: 2025-11-23 08:26:35.40491132 +0000 UTC m=+0.295230159 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z)
Nov 23 03:26:35 localhost podman[83765]: 2025-11-23 08:26:35.414947355 +0000 UTC m=+0.310813600 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, release=1761123044)
Nov 23 03:26:35 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:26:35 localhost podman[83765]: 2025-11-23 08:26:35.448882199 +0000 UTC m=+0.344748464 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible)
Nov 23 03:26:35 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:26:35 localhost podman[83777]: 2025-11-23 08:26:35.469718919 +0000 UTC m=+0.355448467 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64)
Nov 23 03:26:35 localhost podman[83777]: 2025-11-23 08:26:35.483009048 +0000 UTC m=+0.368738606 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, vcs-type=git)
Nov 23 03:26:35 localhost podman[83764]: 2025-11-23 08:26:35.391729074 +0000 UTC m=+0.291498451 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64)
Nov 23 03:26:35 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:26:35 localhost podman[83764]: 2025-11-23 08:26:35.523974138 +0000 UTC m=+0.423743495 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team)
Nov 23 03:26:35 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:26:36 localhost systemd[1]: tmp-crun.mEKkCl.mount: Deactivated successfully.
Nov 23 03:26:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:26:39 localhost systemd[1]: tmp-crun.3bNS8f.mount: Deactivated successfully.
Nov 23 03:26:39 localhost podman[83898]: 2025-11-23 08:26:39.164048308 +0000 UTC m=+0.075460079 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:26:39 localhost podman[83898]: 2025-11-23 08:26:39.581185039 +0000 UTC m=+0.492596870 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Nov 23 03:26:39 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:26:41 localhost systemd[1]: tmp-crun.PJWd1B.mount: Deactivated successfully.
Nov 23 03:26:41 localhost podman[83921]: 2025-11-23 08:26:41.195208306 +0000 UTC m=+0.100624491 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.openshift.expose-services=, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:14:25Z, architecture=x86_64, config_id=tripleo_step4)
Nov 23 03:26:41 localhost podman[83921]: 2025-11-23 08:26:41.244888296 +0000 UTC m=+0.150304461 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:26:41 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:26:41 localhost podman[83922]: 2025-11-23 08:26:41.247071923 +0000 UTC m=+0.148240656 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, container_name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 23 03:26:41 localhost podman[83922]: 2025-11-23 08:26:41.331046206 +0000 UTC m=+0.232214929 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com)
Nov 23 03:26:41 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:26:41 localhost podman[83923]: 2025-11-23 08:26:41.301389404 +0000 UTC m=+0.200895553 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step1, vcs-type=git, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:26:41 localhost podman[83923]: 2025-11-23 08:26:41.515953758 +0000 UTC m=+0.415459947 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr)
Nov 23 03:26:41 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:26:42 localhost systemd[1]: tmp-crun.mvrAFC.mount: Deactivated successfully.
Nov 23 03:27:04 localhost sshd[84041]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:27:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:27:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:27:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:27:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:27:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:27:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:27:06 localhost podman[84052]: 2025-11-23 08:27:06.202969336 +0000 UTC m=+0.093116154 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12)
Nov 23 03:27:06 localhost podman[84052]: 2025-11-23 08:27:06.21485976 +0000 UTC m=+0.105006548 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1)
Nov 23 03:27:06 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:27:06 localhost podman[84044]: 2025-11-23 08:27:06.249661397 +0000 UTC m=+0.147879098 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Nov 23 03:27:06 localhost podman[84046]: 2025-11-23 08:27:06.313402387 +0000 UTC m=+0.202631521 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:27:06 localhost podman[84043]: 2025-11-23 08:27:06.341002884 +0000 UTC m=+0.241956187 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Nov 23 03:27:06 localhost podman[84043]: 2025-11-23 08:27:06.349775175 +0000 UTC m=+0.250728528 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, url=https://www.redhat.com)
Nov 23 03:27:06 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:27:06 localhost podman[84044]: 2025-11-23 08:27:06.372009631 +0000 UTC m=+0.270227282 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, container_name=nova_compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5)
Nov 23 03:27:06 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:27:06 localhost podman[84045]: 2025-11-23 08:27:06.290898294 +0000 UTC m=+0.186318171 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12)
Nov 23 03:27:06 localhost podman[84046]: 2025-11-23 08:27:06.400105491 +0000 UTC m=+0.289334635 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, version=17.1.12, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team)
Nov 23 03:27:06 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:27:06 localhost podman[84059]: 2025-11-23 08:27:06.462926686 +0000 UTC m=+0.350176578 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64)
Nov 23 03:27:06 localhost podman[84045]: 2025-11-23 08:27:06.476480073 +0000 UTC m=+0.371899900 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:27:06 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:27:06 localhost podman[84059]: 2025-11-23 08:27:06.495001592 +0000 UTC m=+0.382251494 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, distribution-scope=public, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_id=tripleo_step4)
Nov 23 03:27:06 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:27:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:27:10 localhost podman[84183]: 2025-11-23 08:27:10.171866542 +0000 UTC m=+0.079783953 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, version=17.1.12, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible)
Nov 23 03:27:10 localhost podman[84183]: 2025-11-23 08:27:10.555000527 +0000 UTC m=+0.462917938 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, url=https://www.redhat.com, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:27:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:27:10 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:27:10 localhost recover_tripleo_nova_virtqemud[84205]: 61733
Nov 23 03:27:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:27:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:27:12 localhost systemd[1]: tmp-crun.Ja7yHl.mount: Deactivated successfully.
Nov 23 03:27:12 localhost podman[84207]: 2025-11-23 08:27:12.186634978 +0000 UTC m=+0.093794552 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, container_name=ovn_metadata_agent, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:27:12 localhost systemd[1]: tmp-crun.RaKzSN.mount: Deactivated successfully.
Nov 23 03:27:12 localhost podman[84208]: 2025-11-23 08:27:12.241674728 +0000 UTC m=+0.146988824 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1)
Nov 23 03:27:12 localhost podman[84209]: 2025-11-23 08:27:12.289193971 +0000 UTC m=+0.192268168 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12)
Nov 23 03:27:12 localhost podman[84208]: 2025-11-23 08:27:12.318365079 +0000 UTC m=+0.223679175 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Nov 23 03:27:12 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:27:12 localhost podman[84207]: 2025-11-23 08:27:12.368864569 +0000 UTC m=+0.276024133 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:27:12 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:27:12 localhost podman[84209]: 2025-11-23 08:27:12.507803871 +0000 UTC m=+0.410878098 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:27:12 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:27:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:27:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:27:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:27:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:27:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:27:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:27:37 localhost podman[84360]: 2025-11-23 08:27:37.196614005 +0000 UTC m=+0.102319517 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, container_name=logrotate_crond, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Nov 23 03:27:37 localhost podman[84360]: 2025-11-23 08:27:37.201676249 +0000 UTC m=+0.107381701 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-cron, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Nov 23 03:27:37 localhost systemd[1]: tmp-crun.ZJY3DU.mount: Deactivated successfully.
Nov 23 03:27:37 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:27:37 localhost podman[84362]: 2025-11-23 08:27:37.225676281 +0000 UTC m=+0.122349134 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:27:37 localhost podman[84375]: 2025-11-23 08:27:37.279593002 +0000 UTC m=+0.167658878 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:27:37 localhost podman[84361]: 2025-11-23 08:27:37.34706611 +0000 UTC m=+0.250988554 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Nov 23 03:27:37 localhost podman[84363]: 2025-11-23 08:27:37.31065079 +0000 UTC m=+0.204361375 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container)
Nov 23 03:27:37 localhost podman[84361]: 2025-11-23 08:27:37.380143581 +0000 UTC m=+0.284066065 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:27:37 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:27:37 localhost podman[84363]: 2025-11-23 08:27:37.393980635 +0000 UTC m=+0.287691220 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z)
Nov 23 03:27:37 localhost podman[84375]: 2025-11-23 08:27:37.413719695 +0000 UTC m=+0.301785571 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12)
Nov 23 03:27:37 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:27:37 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:27:37 localhost podman[84362]: 2025-11-23 08:27:37.463900028 +0000 UTC m=+0.360572881 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, vcs-type=git, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, 
batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.)
Nov 23 03:27:37 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:27:37 localhost podman[84374]: 2025-11-23 08:27:37.508952755 +0000 UTC m=+0.401175221 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, architecture=x86_64, maintainer=OpenStack 
TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Nov 23 03:27:37 localhost podman[84374]: 2025-11-23 08:27:37.547035769 +0000 UTC m=+0.439258245 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-iscsid-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:27:37 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:27:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:27:41 localhost podman[84495]: 2025-11-23 08:27:41.168961802 +0000 UTC m=+0.075325335 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, distribution-scope=public)
Nov 23 03:27:41 localhost podman[84495]: 2025-11-23 08:27:41.56703058 +0000 UTC m=+0.473394083 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:27:41 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:27:43 localhost systemd[1]: tmp-crun.aAbtqJ.mount: Deactivated successfully.
Nov 23 03:27:43 localhost podman[84519]: 2025-11-23 08:27:43.159832548 +0000 UTC m=+0.065041025 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, 
io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:27:43 localhost podman[84518]: 2025-11-23 08:27:43.225438676 +0000 UTC m=+0.131552526 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:27:43 localhost podman[84519]: 2025-11-23 08:27:43.243980135 +0000 UTC m=+0.149188552 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ovn_controller, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:27:43 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:27:43 localhost podman[84518]: 2025-11-23 08:27:43.254837161 +0000 UTC m=+0.160950971 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com)
Nov 23 03:27:43 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:27:43 localhost podman[84520]: 2025-11-23 08:27:43.196791022 +0000 UTC m=+0.096391321 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:27:43 localhost podman[84520]: 2025-11-23 08:27:43.363972437 +0000 UTC m=+0.263572786 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red 
Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:27:43 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:27:44 localhost systemd[1]: tmp-crun.UdVkh5.mount: Deactivated successfully.
Nov 23 03:28:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:28:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:28:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:28:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:28:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:28:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:28:08 localhost systemd[1]: tmp-crun.yp2vyN.mount: Deactivated successfully.
Nov 23 03:28:08 localhost podman[84637]: 2025-11-23 08:28:08.196596322 +0000 UTC m=+0.099514104 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Nov 23 03:28:08 localhost podman[84637]: 2025-11-23 08:28:08.25838343 +0000 UTC m=+0.161301242 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Nov 23 03:28:08 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:28:08 localhost podman[84648]: 2025-11-23 08:28:08.269464892 +0000 UTC m=+0.155035206 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=)
Nov 23 03:28:08 localhost podman[84648]: 2025-11-23 08:28:08.350896307 +0000 UTC m=+0.236466571 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Nov 23 03:28:08 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:28:08 localhost podman[84645]: 2025-11-23 08:28:08.368446189 +0000 UTC m=+0.258001148 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:28:08 localhost podman[84657]: 2025-11-23 08:28:08.321891383 +0000 UTC m=+0.200188346 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:28:08 localhost podman[84638]: 2025-11-23 08:28:08.235921818 +0000 UTC m=+0.133333604 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:28:08 localhost podman[84645]: 2025-11-23 08:28:08.403877553 +0000 UTC m=+0.293432482 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, container_name=collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:28:08 localhost podman[84639]: 2025-11-23 08:28:08.409614134 +0000 UTC m=+0.305242953 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2025-11-19T00:12:45Z)
Nov 23 03:28:08 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:28:08 localhost podman[84638]: 2025-11-23 08:28:08.421146908 +0000 UTC m=+0.318558734 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:28:08 localhost podman[84639]: 2025-11-23 08:28:08.441862754 +0000 UTC m=+0.337491593 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, 
batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Nov 23 03:28:08 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:28:08 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:28:08 localhost podman[84657]: 2025-11-23 08:28:08.460344861 +0000 UTC m=+0.338641804 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.)
Nov 23 03:28:08 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:28:09 localhost systemd[1]: tmp-crun.m6Sde7.mount: Deactivated successfully.
Nov 23 03:28:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:28:12 localhost podman[84774]: 2025-11-23 08:28:12.170005095 +0000 UTC m=+0.077908474 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Nov 23 03:28:12 localhost podman[84774]: 2025-11-23 08:28:12.580846361 +0000 UTC m=+0.488749730 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.)
Nov 23 03:28:12 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:28:14 localhost systemd[1]: tmp-crun.XBL68J.mount: Deactivated successfully.
Nov 23 03:28:14 localhost podman[84798]: 2025-11-23 08:28:14.186513928 +0000 UTC m=+0.095038356 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
container_name=ovn_controller, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:28:14 localhost podman[84797]: 2025-11-23 08:28:14.22760093 +0000 UTC m=+0.137276498 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc.)
Nov 23 03:28:14 localhost podman[84798]: 2025-11-23 08:28:14.242960965 +0000 UTC m=+0.151485383 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1)
Nov 23 03:28:14 localhost podman[84799]: 2025-11-23 08:28:14.199746006 +0000 UTC m=+0.099745169 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, release=1761123044, container_name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Nov 23 03:28:14 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:28:14 localhost podman[84797]: 2025-11-23 08:28:14.272800941 +0000 UTC m=+0.182476539 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent)
Nov 23 03:28:14 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:28:14 localhost podman[84799]: 2025-11-23 08:28:14.410182811 +0000 UTC m=+0.310181924 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1)
Nov 23 03:28:14 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:28:25 localhost sshd[84872]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:28:28 localhost sshd[84874]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:28:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:28:30 localhost recover_tripleo_nova_virtqemud[84938]: 61733
Nov 23 03:28:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:28:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:28:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:28:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:28:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:28:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:28:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:28:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:28:39 localhost systemd[1]: tmp-crun.RP2Lf1.mount: Deactivated successfully.
Nov 23 03:28:39 localhost podman[84959]: 2025-11-23 08:28:39.256169279 +0000 UTC m=+0.154003209 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.)
Nov 23 03:28:39 localhost podman[84955]: 2025-11-23 08:28:39.232521356 +0000 UTC m=+0.131792273 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:28:39 localhost podman[84958]: 2025-11-23 08:28:39.304562914 +0000 UTC m=+0.203923584 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:28:39 localhost podman[84955]: 2025-11-23 08:28:39.314414894 +0000 UTC m=+0.213685801 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:28:39 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:28:39 localhost podman[84959]: 2025-11-23 08:28:39.33856765 +0000 UTC m=+0.236401590 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=ceilometer_agent_compute, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:28:39 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:28:39 localhost podman[84957]: 2025-11-23 08:28:39.365405497 +0000 UTC m=+0.265128117 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container)
Nov 23 03:28:39 localhost podman[84958]: 2025-11-23 08:28:39.369773623 +0000 UTC m=+0.269134343 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, container_name=iscsid, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, url=https://www.redhat.com)
Nov 23 03:28:39 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:28:39 localhost podman[84957]: 2025-11-23 08:28:39.402967327 +0000 UTC m=+0.302689967 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Nov 23 03:28:39 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:28:39 localhost podman[84956]: 2025-11-23 08:28:39.452231445 +0000 UTC m=+0.351805390 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:28:39 localhost podman[84954]: 2025-11-23 08:28:39.507715627 +0000 UTC m=+0.407592751 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, 
container_name=logrotate_crond, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1)
Nov 23 03:28:39 localhost podman[84956]: 2025-11-23 08:28:39.511993079 +0000 UTC m=+0.411567014 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, 
batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:28:39 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:28:39 localhost podman[84954]: 2025-11-23 08:28:39.595100539 +0000 UTC m=+0.494977613 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, name=rhosp17/openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container)
Nov 23 03:28:39 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:28:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:28:43 localhost podman[85089]: 2025-11-23 08:28:43.184956157 +0000 UTC m=+0.089688363 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_migration_target)
Nov 23 03:28:43 localhost podman[85089]: 2025-11-23 08:28:43.547245673 +0000 UTC m=+0.451977829 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-type=git)
Nov 23 03:28:43 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:28:45 localhost systemd[1]: tmp-crun.umfNkJ.mount: Deactivated successfully.
Nov 23 03:28:45 localhost podman[85112]: 2025-11-23 08:28:45.189515424 +0000 UTC m=+0.098703572 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:28:45 localhost podman[85112]: 2025-11-23 08:28:45.267039907 +0000 UTC m=+0.176227985 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:28:45 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:28:45 localhost podman[85114]: 2025-11-23 08:28:45.285942825 +0000 UTC m=+0.147156559 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc.)
Nov 23 03:28:45 localhost podman[85113]: 2025-11-23 08:28:45.248425176 +0000 UTC m=+0.155098207 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_controller, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Nov 23 03:28:45 localhost podman[85113]: 2025-11-23 08:28:45.327802838 +0000 UTC m=+0.234475879 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, url=https://www.redhat.com)
Nov 23 03:28:45 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:28:45 localhost podman[85114]: 2025-11-23 08:28:45.525889307 +0000 UTC m=+0.387103061 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd)
Nov 23 03:28:45 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:28:46 localhost systemd[1]: tmp-crun.tnRZ4U.mount: Deactivated successfully.
Nov 23 03:29:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:29:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:29:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:29:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:29:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:29:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:29:10 localhost systemd[1]: tmp-crun.jqOOBP.mount: Deactivated successfully.
Nov 23 03:29:10 localhost podman[85239]: 2025-11-23 08:29:10.187824154 +0000 UTC m=+0.078639093 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team)
Nov 23 03:29:10 localhost podman[85239]: 2025-11-23 08:29:10.226842032 +0000 UTC m=+0.117656941 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., container_name=collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_id=tripleo_step3)
Nov 23 03:29:10 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:29:10 localhost podman[85236]: 2025-11-23 08:29:10.238190521 +0000 UTC m=+0.133209971 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, name=rhosp17/openstack-cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:29:10 localhost podman[85245]: 2025-11-23 08:29:10.266470916 +0000 UTC m=+0.147301982 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:29:10 localhost podman[85236]: 2025-11-23 08:29:10.327965837 +0000 UTC m=+0.222985307 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Nov 23 03:29:10 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:29:10 localhost podman[85245]: 2025-11-23 08:29:10.35506421 +0000 UTC m=+0.235895336 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Nov 23 03:29:10 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:29:10 localhost podman[85258]: 2025-11-23 08:29:10.426947764 +0000 UTC m=+0.297083038 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, config_id=tripleo_step4, release=1761123044, distribution-scope=public, tcib_managed=true)
Nov 23 03:29:10 localhost podman[85238]: 2025-11-23 08:29:10.465049529 +0000 UTC m=+0.354444611 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 
17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Nov 23 03:29:10 localhost podman[85258]: 2025-11-23 08:29:10.486975475 +0000 UTC m=+0.357110689 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vcs-type=git, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Nov 23 03:29:10 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:29:10 localhost podman[85238]: 2025-11-23 08:29:10.541763509 +0000 UTC m=+0.431158641 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12)
Nov 23 03:29:10 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:29:10 localhost podman[85237]: 2025-11-23 08:29:10.292919683 +0000 UTC m=+0.184816571 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com)
Nov 23 03:29:10 localhost podman[85237]: 2025-11-23 08:29:10.630091257 +0000 UTC m=+0.521988105 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Nov 23 03:29:10 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:29:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:29:14 localhost podman[85370]: 2025-11-23 08:29:14.170323677 +0000 UTC m=+0.077277426 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4)
Nov 23 03:29:14 localhost podman[85370]: 2025-11-23 08:29:14.591241628 +0000 UTC m=+0.498195457 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12)
Nov 23 03:29:14 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:29:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:29:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:29:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:29:16 localhost systemd[1]: tmp-crun.GZNCBm.mount: Deactivated successfully.
Nov 23 03:29:16 localhost podman[85396]: 2025-11-23 08:29:16.185046824 +0000 UTC m=+0.090313771 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, 
name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z)
Nov 23 03:29:16 localhost podman[85395]: 2025-11-23 08:29:16.232418101 +0000 UTC m=+0.141076767 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:29:16 localhost podman[85396]: 2025-11-23 08:29:16.242025595 +0000 UTC m=+0.147292562 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 23 03:29:16 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:29:16 localhost podman[85395]: 2025-11-23 08:29:16.279910352 +0000 UTC m=+0.188569028 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true)
Nov 23 03:29:16 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:29:16 localhost podman[85397]: 2025-11-23 08:29:16.415030933 +0000 UTC m=+0.314548238 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044)
Nov 23 03:29:16 localhost podman[85397]: 2025-11-23 08:29:16.612908616 +0000 UTC m=+0.512425921 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Nov 23 03:29:16 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:29:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:29:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:29:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:29:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:29:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:29:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:29:41 localhost systemd[1]: tmp-crun.X7OsWa.mount: Deactivated successfully.
Nov 23 03:29:41 localhost podman[85565]: 2025-11-23 08:29:41.254193551 +0000 UTC m=+0.128779454 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, 
url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:29:41 localhost podman[85545]: 2025-11-23 08:29:41.204930014 +0000 UTC m=+0.104850344 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, distribution-scope=public, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:29:41 localhost podman[85546]: 2025-11-23 08:29:41.300929082 +0000 UTC m=+0.195805669 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5)
Nov 23 03:29:41 localhost podman[85565]: 2025-11-23 08:29:41.314835709 +0000 UTC m=+0.189421582 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc.)
Nov 23 03:29:41 localhost podman[85552]: 2025-11-23 08:29:41.270440809 +0000 UTC m=+0.157958142 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:29:41 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:29:41 localhost podman[85546]: 2025-11-23 08:29:41.327145063 +0000 UTC m=+0.222021640 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:29:41 localhost podman[85545]: 2025-11-23 08:29:41.338511472 +0000 UTC m=+0.238431812 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:29:41 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:29:41 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:29:41 localhost podman[85552]: 2025-11-23 08:29:41.349910813 +0000 UTC m=+0.237428216 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Nov 23 03:29:41 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:29:41 localhost podman[85557]: 2025-11-23 08:29:41.304165748 +0000 UTC m=+0.188341063 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 
iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:29:41 localhost podman[85547]: 2025-11-23 08:29:41.415523732 +0000 UTC m=+0.306964369 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Nov 23 03:29:41 localhost podman[85557]: 2025-11-23 08:29:41.436946346 +0000 UTC m=+0.321121651 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:29:41 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:29:41 localhost podman[85547]: 2025-11-23 08:29:41.468398195 +0000 UTC m=+0.359838852 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, version=17.1.12)
Nov 23 03:29:41 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:29:42 localhost systemd[1]: tmp-crun.aNSSqt.mount: Deactivated successfully.
Nov 23 03:29:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:29:45 localhost podman[85680]: 2025-11-23 08:29:45.175364739 +0000 UTC m=+0.082792252 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=nova_migration_target, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, 
name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:29:45 localhost podman[85680]: 2025-11-23 08:29:45.599197785 +0000 UTC m=+0.506625258 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, 
distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1)
Nov 23 03:29:45 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:29:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:29:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:29:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:29:47 localhost systemd[1]: tmp-crun.5q3cyu.mount: Deactivated successfully.
Nov 23 03:29:47 localhost podman[85705]: 2025-11-23 08:29:47.187797272 +0000 UTC m=+0.090716861 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12)
Nov 23 03:29:47 localhost podman[85704]: 2025-11-23 08:29:47.244664421 +0000 UTC m=+0.148937645 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
architecture=x86_64, vcs-type=git, build-date=2025-11-18T23:34:05Z, release=1761123044, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Nov 23 03:29:47 localhost podman[85703]: 2025-11-23 08:29:47.290412806 +0000 UTC m=+0.197142015 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
distribution-scope=public, architecture=x86_64, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com)
Nov 23 03:29:47 localhost podman[85704]: 2025-11-23 08:29:47.321765482 +0000 UTC m=+0.226038656 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Nov 23 03:29:47 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:29:47 localhost podman[85703]: 2025-11-23 08:29:47.364830118 +0000 UTC m=+0.271559267 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, architecture=x86_64)
Nov 23 03:29:47 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:29:47 localhost podman[85705]: 2025-11-23 08:29:47.419012155 +0000 UTC m=+0.321931764 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO 
Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:29:47 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:29:49 localhost sshd[85777]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:30:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:30:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:30:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:30:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:30:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:30:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:30:12 localhost systemd[1]: tmp-crun.UcU1Xi.mount: Deactivated successfully.
Nov 23 03:30:12 localhost podman[85826]: 2025-11-23 08:30:12.194128114 +0000 UTC m=+0.092939730 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, tcib_managed=true)
Nov 23 03:30:12 localhost podman[85826]: 2025-11-23 08:30:12.24789664 +0000 UTC m=+0.146708226 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:30:12 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:30:12 localhost podman[85825]: 2025-11-23 08:30:12.286368014 +0000 UTC m=+0.187148452 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute)
Nov 23 03:30:12 localhost podman[85844]: 2025-11-23 08:30:12.249117153 +0000 UTC m=+0.136657232 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Nov 23 03:30:12 localhost podman[85844]: 2025-11-23 08:30:12.331916694 +0000 UTC m=+0.219456753 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:30:12 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:30:12 localhost podman[85825]: 2025-11-23 08:30:12.341079515 +0000 UTC m=+0.241859923 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Nov 23 03:30:12 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:30:12 localhost podman[85824]: 2025-11-23 08:30:12.336796503 +0000 UTC m=+0.240359725 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4)
Nov 23 03:30:12 localhost podman[85828]: 2025-11-23 08:30:12.405047161 +0000 UTC m=+0.298006703 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, release=1761123044, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Nov 23 03:30:12 localhost podman[85828]: 2025-11-23 08:30:12.414078458 +0000 UTC m=+0.307037990 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:30:12 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:30:12 localhost podman[85838]: 2025-11-23 08:30:12.453079746 +0000 UTC m=+0.340762329 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=)
Nov 23 03:30:12 localhost podman[85838]: 2025-11-23 08:30:12.462431763 +0000 UTC m=+0.350114326 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 
iscsid, config_id=tripleo_step3, url=https://www.redhat.com, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Nov 23 03:30:12 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:30:12 localhost podman[85824]: 2025-11-23 08:30:12.519593269 +0000 UTC m=+0.423156501 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.openshift.expose-services=, 
com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:30:12 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:30:13 localhost systemd[1]: tmp-crun.Z6E7In.mount: Deactivated successfully.
Nov 23 03:30:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:30:16 localhost podman[85956]: 2025-11-23 08:30:16.176447681 +0000 UTC m=+0.077247155 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:30:16 localhost podman[85956]: 2025-11-23 08:30:16.570846113 +0000 UTC m=+0.471645587 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, container_name=nova_migration_target)
Nov 23 03:30:16 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:30:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:30:18 localhost systemd[1]: tmp-crun.xnpNeE.mount: Deactivated successfully.
Nov 23 03:30:18 localhost podman[85982]: 2025-11-23 08:30:18.235733052 +0000 UTC m=+0.133897210 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, version=17.1.12, build-date=2025-11-18T23:34:05Z, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, 
managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, architecture=x86_64)
Nov 23 03:30:18 localhost podman[85982]: 2025-11-23 08:30:18.258895762 +0000 UTC m=+0.157059930 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, name=rhosp17/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:30:18 localhost podman[85981]: 2025-11-23 08:30:18.216852594 +0000 UTC m=+0.124517283 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, config_id=tripleo_step4, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:30:18 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:30:18 localhost podman[85981]: 2025-11-23 08:30:18.295832565 +0000 UTC m=+0.203497294 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:30:18 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:30:18 localhost podman[85983]: 2025-11-23 08:30:18.368247053 +0000 UTC m=+0.261770539 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=metrics_qdr, url=https://www.redhat.com, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Nov 23 03:30:18 localhost podman[85983]: 2025-11-23 08:30:18.588020843 +0000 UTC m=+0.481544309 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, version=17.1.12, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:30:18 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:30:32 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:30:32 localhost recover_tripleo_nova_virtqemud[86076]: 61733
Nov 23 03:30:32 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:30:32 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:30:43 localhost systemd[1]: tmp-crun.xOZCox.mount: Deactivated successfully.
Nov 23 03:30:43 localhost podman[86139]: 2025-11-23 08:30:43.202598063 +0000 UTC m=+0.094440059 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi)
Nov 23 03:30:43 localhost podman[86139]: 2025-11-23 08:30:43.263714994 +0000 UTC m=+0.155557000 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, url=https://www.redhat.com, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:30:43 localhost podman[86138]: 2025-11-23 08:30:43.23929492 +0000 UTC m=+0.131245789 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Nov 23 03:30:43 localhost podman[86137]: 2025-11-23 08:30:43.307116327 +0000 UTC m=+0.198656156 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, 
version=17.1.12, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z)
Nov 23 03:30:43 localhost podman[86137]: 2025-11-23 08:30:43.318147688 +0000 UTC m=+0.209687517 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:30:43 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:30:43 localhost podman[86141]: 2025-11-23 08:30:43.354123256 +0000 UTC m=+0.238178757 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 
iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid)
Nov 23 03:30:43 localhost podman[86150]: 2025-11-23 08:30:43.273770078 +0000 UTC m=+0.152882839 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:30:43 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:30:43 localhost podman[86150]: 2025-11-23 08:30:43.406966938 +0000 UTC m=+0.286079729 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute)
Nov 23 03:30:43 localhost podman[86140]: 2025-11-23 08:30:43.414799525 +0000 UTC m=+0.301984438 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=collectd, version=17.1.12, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64)
Nov 23 03:30:43 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:30:43 localhost podman[86141]: 2025-11-23 08:30:43.420915816 +0000 UTC m=+0.304971337 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, container_name=iscsid, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z)
Nov 23 03:30:43 localhost podman[86138]: 2025-11-23 08:30:43.421354057 +0000 UTC m=+0.313304936 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:30:43 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:30:43 localhost podman[86140]: 2025-11-23 08:30:43.453999918 +0000 UTC m=+0.341184851 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:30:43 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:30:43 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:30:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:30:47 localhost podman[86268]: 2025-11-23 08:30:47.179978822 +0000 UTC m=+0.087069274 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:30:47 localhost podman[86268]: 2025-11-23 08:30:47.582308592 +0000 UTC m=+0.489399004 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:30:47 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:30:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:30:49 localhost podman[86293]: 2025-11-23 08:30:49.1766402 +0000 UTC m=+0.082171445 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Nov 23 03:30:49 localhost podman[86293]: 2025-11-23 08:30:49.202946134 +0000 UTC m=+0.108477429 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, release=1761123044, version=17.1.12, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, batch=17.1_20251118.1, container_name=ovn_controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Nov 23 03:30:49 localhost podman[86292]: 2025-11-23 08:30:49.229431062 +0000 UTC m=+0.134922496 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:30:49 localhost podman[86292]: 2025-11-23 08:30:49.270212507 +0000 UTC m=+0.175703951 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:30:49 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:30:49 localhost podman[86294]: 2025-11-23 08:30:49.291287411 +0000 UTC m=+0.193637243 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z)
Nov 23 03:30:49 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:30:49 localhost podman[86294]: 2025-11-23 08:30:49.499516489 +0000 UTC m=+0.401866301 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step1)
Nov 23 03:30:49 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:31:10 localhost sshd[86416]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:31:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:31:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:31:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:31:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:31:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:31:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:31:14 localhost systemd[1]: tmp-crun.ysRZnY.mount: Deactivated successfully.
Nov 23 03:31:14 localhost podman[86419]: 2025-11-23 08:31:14.199441038 +0000 UTC m=+0.091378239 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:31:14 localhost podman[86418]: 2025-11-23 08:31:14.212486661 +0000 UTC m=+0.103205589 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron)
Nov 23 03:31:14 localhost podman[86418]: 2025-11-23 08:31:14.253018469 +0000 UTC m=+0.143737397 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, architecture=x86_64, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container)
Nov 23 03:31:14 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:31:14 localhost podman[86419]: 2025-11-23 08:31:14.284318064 +0000 UTC m=+0.176255275 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:31:14 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:31:14 localhost podman[86428]: 2025-11-23 08:31:14.255377712 +0000 UTC m=+0.133661893 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, container_name=iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public)
Nov 23 03:31:14 localhost podman[86428]: 2025-11-23 08:31:14.339117438 +0000 UTC m=+0.217401629 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:31:14 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:31:14 localhost podman[86433]: 2025-11-23 08:31:14.420958885 +0000 UTC m=+0.296919915 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.openshift.expose-services=)
Nov 23 03:31:14 localhost podman[86421]: 2025-11-23 08:31:14.463191227 +0000 UTC m=+0.347803835 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.openshift.expose-services=, container_name=collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:31:14 localhost podman[86420]: 2025-11-23 08:31:14.466145555 +0000 UTC m=+0.351984736 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:31:14 localhost podman[86433]: 2025-11-23 08:31:14.476605201 +0000 UTC m=+0.352566221 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Nov 23 03:31:14 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:31:14 localhost podman[86420]: 2025-11-23 08:31:14.496179307 +0000 UTC m=+0.382018438 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc.)
Nov 23 03:31:14 localhost podman[86421]: 2025-11-23 08:31:14.504076234 +0000 UTC m=+0.388688892 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Nov 23 03:31:14 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:31:14 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:31:15 localhost systemd[1]: tmp-crun.bNbFAv.mount: Deactivated successfully.
Nov 23 03:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:31:18 localhost podman[86552]: 2025-11-23 08:31:18.178433789 +0000 UTC m=+0.082573257 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:31:18 localhost podman[86552]: 2025-11-23 08:31:18.540474498 +0000 UTC m=+0.444613976 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=)
Nov 23 03:31:18 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:31:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:31:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:31:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:31:20 localhost systemd[1]: tmp-crun.2eVaOC.mount: Deactivated successfully.
Nov 23 03:31:20 localhost podman[86575]: 2025-11-23 08:31:20.182132034 +0000 UTC m=+0.092466387 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Nov 23 03:31:20 localhost podman[86576]: 2025-11-23 08:31:20.222888228 +0000 UTC m=+0.131962878 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:31:20 localhost podman[86577]: 2025-11-23 08:31:20.276679085 +0000 UTC m=+0.179568992 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Nov 23 03:31:20 localhost podman[86575]: 2025-11-23 08:31:20.301820987 +0000 UTC m=+0.212155340 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Nov 23 03:31:20 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:31:20 localhost podman[86576]: 2025-11-23 08:31:20.328829299 +0000 UTC m=+0.237903969 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, 
config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc.)
Nov 23 03:31:20 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:31:20 localhost podman[86577]: 2025-11-23 08:31:20.470500561 +0000 UTC m=+0.373390418 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team)
Nov 23 03:31:20 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:31:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:31:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:31:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:31:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:31:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:31:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:31:45 localhost systemd[1]: tmp-crun.DlF882.mount: Deactivated successfully.
Nov 23 03:31:45 localhost podman[86726]: 2025-11-23 08:31:45.204432566 +0000 UTC m=+0.107509553 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:31:45 localhost podman[86740]: 2025-11-23 08:31:45.216892075 +0000 UTC m=+0.106102446 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com)
Nov 23 03:31:45 localhost podman[86726]: 2025-11-23 08:31:45.217960713 +0000 UTC m=+0.121037690 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Nov 23 03:31:45 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:31:45 localhost podman[86729]: 2025-11-23 08:31:45.265409963 +0000 UTC m=+0.156922935 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc.)
Nov 23 03:31:45 localhost podman[86729]: 2025-11-23 08:31:45.277903333 +0000 UTC m=+0.169416245 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, 
release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true)
Nov 23 03:31:45 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:31:45 localhost podman[86728]: 2025-11-23 08:31:45.307092302 +0000 UTC m=+0.203961115 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:31:45 localhost podman[86727]: 2025-11-23 08:31:45.356637147 +0000 UTC m=+0.256586841 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git)
Nov 23 03:31:45 localhost podman[86728]: 2025-11-23 08:31:45.368051698 +0000 UTC m=+0.264920511 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:31:45 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:31:45 localhost podman[86727]: 2025-11-23 08:31:45.412989842 +0000 UTC m=+0.312939486 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12)
Nov 23 03:31:45 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:31:45 localhost podman[86740]: 2025-11-23 08:31:45.454967678 +0000 UTC m=+0.344178049 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, container_name=iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:31:45 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:31:45 localhost podman[86746]: 2025-11-23 08:31:45.472684725 +0000 UTC m=+0.356504444 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_compute, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Nov 23 03:31:45 localhost podman[86746]: 2025-11-23 08:31:45.505981702 +0000 UTC m=+0.389801371 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:31:45 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:31:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:31:49 localhost systemd[1]: tmp-crun.7ybLw3.mount: Deactivated successfully.
Nov 23 03:31:49 localhost podman[86862]: 2025-11-23 08:31:49.172328936 +0000 UTC m=+0.072813859 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Nov 23 03:31:49 localhost podman[86862]: 2025-11-23 08:31:49.530077652 +0000 UTC m=+0.430562595 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc.)
Nov 23 03:31:49 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:31:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:31:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:31:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:31:51 localhost podman[86885]: 2025-11-23 08:31:51.189347271 +0000 UTC m=+0.092224871 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, vcs-type=git, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com)
Nov 23 03:31:51 localhost systemd[1]: tmp-crun.9MYO3X.mount: Deactivated successfully.
Nov 23 03:31:51 localhost podman[86887]: 2025-11-23 08:31:51.244488844 +0000 UTC m=+0.143179704 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1)
Nov 23 03:31:51 localhost podman[86885]: 2025-11-23 08:31:51.249227319 +0000 UTC m=+0.152104859 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Nov 23 03:31:51 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:31:51 localhost systemd[1]: tmp-crun.xhc75H.mount: Deactivated successfully.
Nov 23 03:31:51 localhost podman[86886]: 2025-11-23 08:31:51.33426868 +0000 UTC m=+0.235580408 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:31:51 localhost podman[86886]: 2025-11-23 08:31:51.361480396 +0000 UTC m=+0.262792134 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, 
build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., version=17.1.12)
Nov 23 03:31:51 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:31:51 localhost podman[86887]: 2025-11-23 08:31:51.485784322 +0000 UTC m=+0.384475092 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, 
io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Nov 23 03:31:51 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:32:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:32:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5640 writes, 24K keys, 5640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5640 writes, 724 syncs, 7.79 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 468 writes, 1742 keys, 468 commit groups, 1.0 writes per commit group, ingest: 2.21 MB, 0.00 MB/s#012Interval WAL: 468 writes, 172 syncs, 2.72 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:32:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:32:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 4929 writes, 22K keys, 4929 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4929 writes, 684 syncs, 7.21 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 481 writes, 1969 keys, 481 commit groups, 1.0 writes per commit group, ingest: 2.31 MB, 0.00 MB/s#012Interval WAL: 481 writes, 182 syncs, 2.64 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:32:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:32:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:32:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:32:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:32:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:32:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:32:16 localhost systemd[1]: tmp-crun.mQELZ0.mount: Deactivated successfully.
Nov 23 03:32:16 localhost podman[87016]: 2025-11-23 08:32:16.208695236 +0000 UTC m=+0.091698247 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Nov 23 03:32:16 localhost podman[87016]: 2025-11-23 08:32:16.223634689 +0000 UTC m=+0.106637730 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=iscsid, architecture=x86_64)
Nov 23 03:32:16 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:32:16 localhost podman[87009]: 2025-11-23 08:32:16.303940856 +0000 UTC m=+0.193931281 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, release=1761123044)
Nov 23 03:32:16 localhost podman[87010]: 2025-11-23 08:32:16.362763916 +0000 UTC m=+0.250295936 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.buildah.version=1.41.4, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:32:16 localhost podman[87010]: 2025-11-23 08:32:16.376811836 +0000 UTC m=+0.264343846 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=)
Nov 23 03:32:16 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:32:16 localhost podman[87008]: 2025-11-23 08:32:16.461086316 +0000 UTC m=+0.353915356 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5)
Nov 23 03:32:16 localhost podman[87007]: 2025-11-23 08:32:16.513111288 +0000 UTC m=+0.409200283 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., container_name=logrotate_crond, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:32:16 localhost podman[87009]: 2025-11-23 08:32:16.534293825 +0000 UTC m=+0.424284290 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4)
Nov 23 03:32:16 localhost podman[87022]: 2025-11-23 08:32:16.278412393 +0000 UTC m=+0.158256241 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:32:16 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:32:16 localhost podman[87007]: 2025-11-23 08:32:16.546427054 +0000 UTC m=+0.442516059 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, distribution-scope=public, container_name=logrotate_crond, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64)
Nov 23 03:32:16 localhost podman[87008]: 2025-11-23 08:32:16.565356663 +0000 UTC m=+0.458185693 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:32:16 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:32:16 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:32:16 localhost podman[87022]: 2025-11-23 08:32:16.803202091 +0000 UTC m=+0.683045889 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:32:16 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:32:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:32:20 localhost podman[87146]: 2025-11-23 08:32:20.181280858 +0000 UTC m=+0.085961066 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO 
Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Nov 23 03:32:20 localhost podman[87146]: 2025-11-23 08:32:20.575714101 +0000 UTC m=+0.480394289 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container)
Nov 23 03:32:20 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:32:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:32:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:32:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:32:22 localhost podman[87169]: 2025-11-23 08:32:22.181967094 +0000 UTC m=+0.083626884 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, 
Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git)
Nov 23 03:32:22 localhost podman[87171]: 2025-11-23 08:32:22.200102933 +0000 UTC m=+0.092593921 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, 
name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1)
Nov 23 03:32:22 localhost podman[87169]: 2025-11-23 08:32:22.238168145 +0000 UTC m=+0.139827955 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, container_name=ovn_metadata_agent, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:32:22 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:32:22 localhost podman[87170]: 2025-11-23 08:32:22.243374512 +0000 UTC m=+0.138244593 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_controller, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com)
Nov 23 03:32:22 localhost podman[87170]: 2025-11-23 08:32:22.325629909 +0000 UTC m=+0.220499930 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller)
Nov 23 03:32:22 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:32:22 localhost podman[87171]: 2025-11-23 08:32:22.435283579 +0000 UTC m=+0.327774607 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044)
Nov 23 03:32:22 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:32:23 localhost systemd[1]: tmp-crun.8z1NdH.mount: Deactivated successfully.
Nov 23 03:32:32 localhost sshd[87243]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:32:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:32:34 localhost recover_tripleo_nova_virtqemud[87246]: 61733
Nov 23 03:32:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:32:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:32:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:32:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:32:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:32:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:32:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:32:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:32:47 localhost podman[87327]: 2025-11-23 08:32:47.175761615 +0000 UTC m=+0.064112650 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3)
Nov 23 03:32:47 localhost podman[87327]: 2025-11-23 08:32:47.190717989 +0000 UTC m=+0.079069024 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_id=tripleo_step3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:32:47 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:32:47 localhost podman[87326]: 2025-11-23 08:32:47.235453458 +0000 UTC m=+0.125761304 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 
17.1 collectd, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:32:47 localhost podman[87326]: 2025-11-23 08:32:47.244745393 +0000 UTC m=+0.135053279 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:32:47 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:32:47 localhost podman[87332]: 2025-11-23 08:32:47.261204447 +0000 UTC m=+0.141538651 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com)
Nov 23 03:32:47 localhost podman[87324]: 2025-11-23 08:32:47.294010941 +0000 UTC m=+0.188060886 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible)
Nov 23 03:32:47 localhost podman[87323]: 2025-11-23 08:32:47.307828615 +0000 UTC m=+0.200242557 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.openshift.expose-services=, container_name=logrotate_crond, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:32:47 localhost podman[87323]: 2025-11-23 08:32:47.341369879 +0000 UTC m=+0.233783801 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-cron)
Nov 23 03:32:47 localhost podman[87332]: 2025-11-23 08:32:47.34179135 +0000 UTC m=+0.222125584 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z)
Nov 23 03:32:47 localhost podman[87324]: 2025-11-23 08:32:47.353462508 +0000 UTC m=+0.247512483 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64)
Nov 23 03:32:47 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:32:47 localhost podman[87325]: 2025-11-23 08:32:47.366634534 +0000 UTC m=+0.255604825 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Nov 23 03:32:47 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:32:47 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:32:47 localhost podman[87325]: 2025-11-23 08:32:47.444133647 +0000 UTC m=+0.333103918 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, 
Inc., container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible)
Nov 23 03:32:47 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:32:48 localhost systemd[1]: tmp-crun.w5YfEY.mount: Deactivated successfully.
Nov 23 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:32:51 localhost systemd[1]: tmp-crun.wNse9D.mount: Deactivated successfully.
Nov 23 03:32:51 localhost podman[87466]: 2025-11-23 08:32:51.205264278 +0000 UTC m=+0.109830385 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:32:51 localhost podman[87466]: 2025-11-23 08:32:51.54196361 +0000 UTC m=+0.446529727 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:32:51 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:32:53 localhost podman[87490]: 2025-11-23 08:32:53.191484283 +0000 UTC m=+0.095415846 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Nov 23 03:32:53 localhost systemd[1]: tmp-crun.44AVSe.mount: Deactivated successfully.
Nov 23 03:32:53 localhost podman[87492]: 2025-11-23 08:32:53.249569612 +0000 UTC m=+0.146722716 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, 
vcs-type=git, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12)
Nov 23 03:32:53 localhost podman[87490]: 2025-11-23 08:32:53.273006551 +0000 UTC m=+0.176938124 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc.)
Nov 23 03:32:53 localhost podman[87491]: 2025-11-23 08:32:53.299700653 +0000 UTC m=+0.198218553 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., 
release=1761123044, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, container_name=ovn_controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:32:53 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:32:53 localhost podman[87491]: 2025-11-23 08:32:53.377251427 +0000 UTC m=+0.275769307 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc.)
Nov 23 03:32:53 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:32:53 localhost podman[87492]: 2025-11-23 08:32:53.460170161 +0000 UTC m=+0.357323245 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Nov 23 03:32:53 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:33:18 localhost podman[87611]: 2025-11-23 08:33:18.209979914 +0000 UTC m=+0.108796037 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Nov 23 03:33:18 localhost podman[87613]: 2025-11-23 08:33:18.254565549 +0000 UTC m=+0.147267500 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1)
Nov 23 03:33:18 localhost podman[87613]: 2025-11-23 08:33:18.281936991 +0000 UTC m=+0.174638942 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true)
Nov 23 03:33:18 localhost podman[87614]: 2025-11-23 08:33:18.29822153 +0000 UTC m=+0.187336157 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Nov 23 03:33:18 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:33:18 localhost podman[87611]: 2025-11-23 08:33:18.321220216 +0000 UTC m=+0.220036279 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, release=1761123044, name=rhosp17/openstack-cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Nov 23 03:33:18 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:33:18 localhost podman[87620]: 2025-11-23 08:33:18.362233307 +0000 UTC m=+0.246643610 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.12)
Nov 23 03:33:18 localhost podman[87620]: 2025-11-23 08:33:18.401668696 +0000 UTC m=+0.286078949 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, tcib_managed=true)
Nov 23 03:33:18 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:33:18 localhost podman[87612]: 2025-11-23 08:33:18.36580371 +0000 UTC m=+0.261530611 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, tcib_managed=true, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible)
Nov 23 03:33:18 localhost podman[87626]: 2025-11-23 08:33:18.423304565 +0000 UTC m=+0.307312627 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:33:18 localhost podman[87614]: 2025-11-23 08:33:18.439975854 +0000 UTC m=+0.329090541 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, container_name=collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:33:18 localhost podman[87612]: 2025-11-23 08:33:18.451962601 +0000 UTC m=+0.347689512 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Nov 23 03:33:18 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:33:18 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:33:18 localhost podman[87626]: 2025-11-23 08:33:18.508281075 +0000 UTC m=+0.392289177 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute)
Nov 23 03:33:18 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:33:22 localhost podman[87739]: 2025-11-23 08:33:22.189993068 +0000 UTC m=+0.090958274 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute)
Nov 23 03:33:22 localhost podman[87739]: 2025-11-23 08:33:22.564072368 +0000 UTC m=+0.465037534 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_migration_target, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:33:22 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:33:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:33:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:33:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:33:24 localhost podman[87762]: 2025-11-23 08:33:24.187087646 +0000 UTC m=+0.089475125 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Nov 23 03:33:24 localhost podman[87762]: 2025-11-23 08:33:24.234952127 +0000 UTC m=+0.137339576 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red 
Hat, Inc., build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:33:24 localhost systemd[1]: tmp-crun.xB1Ueq.mount: Deactivated successfully.
Nov 23 03:33:24 localhost podman[87763]: 2025-11-23 08:33:24.251826148 +0000 UTC m=+0.152727547 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., release=1761123044)
Nov 23 03:33:24 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:33:24 localhost podman[87763]: 2025-11-23 08:33:24.279119739 +0000 UTC m=+0.180021098 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller)
Nov 23 03:33:24 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:33:24 localhost podman[87764]: 2025-11-23 08:33:24.295717273 +0000 UTC m=+0.191325151 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12)
Nov 23 03:33:24 localhost podman[87764]: 2025-11-23 08:33:24.523829206 +0000 UTC m=+0.419437084 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:33:24 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:33:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:33:49 localhost podman[87968]: 2025-11-23 08:33:49.174716815 +0000 UTC m=+0.069713997 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, config_id=tripleo_step3, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Nov 23 03:33:49 localhost podman[87968]: 2025-11-23 08:33:49.184625349 +0000 UTC m=+0.079622561 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12)
Nov 23 03:33:49 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:33:49 localhost podman[87982]: 2025-11-23 08:33:49.23434905 +0000 UTC m=+0.120675280 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team)
Nov 23 03:33:49 localhost systemd[1]: tmp-crun.yp3R0l.mount: Deactivated successfully.
Nov 23 03:33:49 localhost podman[87982]: 2025-11-23 08:33:49.294949932 +0000 UTC m=+0.181276182 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:33:49 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:33:49 localhost podman[87966]: 2025-11-23 08:33:49.350439726 +0000 UTC m=+0.246861757 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:33:49 localhost podman[87965]: 2025-11-23 08:33:49.299470352 +0000 UTC m=+0.198577664 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:33:49 localhost podman[87964]: 2025-11-23 08:33:49.26535723 +0000 UTC m=+0.164170665 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Nov 23 03:33:49 localhost podman[87964]: 2025-11-23 08:33:49.397949167 +0000 UTC m=+0.296762592 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:33:49 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:33:49 localhost podman[87967]: 2025-11-23 08:33:49.413830712 +0000 UTC m=+0.308842774 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, 
batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Nov 23 03:33:49 localhost podman[87967]: 2025-11-23 08:33:49.424597921 +0000 UTC m=+0.319610023 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team)
Nov 23 03:33:49 localhost podman[87966]: 2025-11-23 08:33:49.434600268 +0000 UTC m=+0.331022289 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container)
Nov 23 03:33:49 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:33:49 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:33:49 localhost podman[87965]: 2025-11-23 08:33:49.480958769 +0000 UTC m=+0.380066111 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Nov 23 03:33:49 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:33:53 localhost podman[88097]: 2025-11-23 08:33:53.165943831 +0000 UTC m=+0.075542593 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_migration_target, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:33:53 localhost podman[88097]: 2025-11-23 08:33:53.544097229 +0000 UTC m=+0.453696041 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4)
Nov 23 03:33:53 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:33:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:33:55 localhost systemd[1]: tmp-crun.Sr2pnO.mount: Deactivated successfully.
Nov 23 03:33:55 localhost podman[88122]: 2025-11-23 08:33:55.172644346 +0000 UTC m=+0.075746028 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:33:55 localhost podman[88120]: 2025-11-23 08:33:55.225021777 +0000 UTC m=+0.135967529 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:33:55 localhost podman[88121]: 2025-11-23 08:33:55.292996557 +0000 UTC m=+0.196582541 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, 
version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Nov 23 03:33:55 localhost podman[88120]: 2025-11-23 08:33:55.31779214 +0000 UTC m=+0.228737902 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, distribution-scope=public, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Nov 23 03:33:55 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:33:55 localhost podman[88122]: 2025-11-23 08:33:55.351466511 +0000 UTC m=+0.254568213 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, version=17.1.12, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:33:55 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:33:55 localhost podman[88121]: 2025-11-23 08:33:55.368278671 +0000 UTC m=+0.271864635 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=ovn_controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044)
Nov 23 03:33:55 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:33:55 localhost sshd[88196]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:34:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:34:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:34:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:34:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:34:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:34:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:34:20 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:34:20 localhost recover_tripleo_nova_virtqemud[88282]: 61733
Nov 23 03:34:20 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:34:20 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:34:20 localhost podman[88262]: 2025-11-23 08:34:20.202909239 +0000 UTC m=+0.085444738 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Nov 23 03:34:20 localhost podman[88245]: 2025-11-23 08:34:20.246036333 +0000 UTC m=+0.140268264 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:34:20 localhost podman[88245]: 2025-11-23 08:34:20.305471753 +0000 UTC m=+0.199703634 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:34:20 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:34:20 localhost podman[88252]: 2025-11-23 08:34:20.272401009 +0000 UTC m=+0.157112556 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com)
Nov 23 03:34:20 localhost podman[88252]: 2025-11-23 08:34:20.352585404 +0000 UTC m=+0.237296931 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, container_name=iscsid, vcs-type=git, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Nov 23 03:34:20 localhost podman[88243]: 2025-11-23 08:34:20.309124101 +0000 UTC m=+0.208215863 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-type=git)
Nov 23 03:34:20 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:34:20 localhost podman[88244]: 2025-11-23 08:34:20.365079089 +0000 UTC m=+0.262010503 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, container_name=nova_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:34:20 localhost podman[88246]: 2025-11-23 08:34:20.424211331 +0000 UTC m=+0.314150557 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:34:20 localhost podman[88246]: 2025-11-23 08:34:20.433059387 +0000 UTC m=+0.322998573 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, architecture=x86_64, vcs-type=git)
Nov 23 03:34:20 localhost podman[88243]: 2025-11-23 08:34:20.441836372 +0000 UTC m=+0.340928204 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, version=17.1.12)
Nov 23 03:34:20 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:34:20 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:34:20 localhost podman[88262]: 2025-11-23 08:34:20.487016941 +0000 UTC m=+0.369552480 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 
17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com)
Nov 23 03:34:20 localhost podman[88244]: 2025-11-23 08:34:20.499042592 +0000 UTC m=+0.395973936 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step5, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team)
Nov 23 03:34:20 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:34:20 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:34:24 localhost podman[88384]: 2025-11-23 08:34:24.167461182 +0000 UTC m=+0.076119257 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=nova_migration_target, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:34:24 localhost podman[88384]: 2025-11-23 08:34:24.505167368 +0000 UTC m=+0.413825503 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:34:24 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:34:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:34:26 localhost systemd[1]: tmp-crun.WcCiub.mount: Deactivated successfully.
Nov 23 03:34:26 localhost podman[88407]: 2025-11-23 08:34:26.199073274 +0000 UTC m=+0.100934782 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4)
Nov 23 03:34:26 localhost systemd[1]: tmp-crun.DppHwl.mount: Deactivated successfully.
Nov 23 03:34:26 localhost podman[88409]: 2025-11-23 08:34:26.257105337 +0000 UTC m=+0.149784020 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Nov 23 03:34:26 localhost podman[88407]: 2025-11-23 08:34:26.27629094 +0000 UTC m=+0.178152408 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, vcs-type=git, tcib_managed=true, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:34:26 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:34:26 localhost podman[88408]: 2025-11-23 08:34:26.342519142 +0000 UTC m=+0.241678878 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=ovn_controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 
17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git)
Nov 23 03:34:26 localhost podman[88408]: 2025-11-23 08:34:26.371823486 +0000 UTC m=+0.270983262 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 23 03:34:26 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:34:26 localhost podman[88409]: 2025-11-23 08:34:26.472186042 +0000 UTC m=+0.364864705 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, maintainer=OpenStack 
TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Nov 23 03:34:26 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:34:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:34:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:34:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:34:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:34:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:34:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:34:51 localhost podman[88555]: 2025-11-23 08:34:51.185520471 +0000 UTC m=+0.085949060 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:32Z, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=)
Nov 23 03:34:51 localhost podman[88570]: 2025-11-23 08:34:51.248629871 +0000 UTC m=+0.132530037 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Nov 23 03:34:51 localhost podman[88556]: 2025-11-23 08:34:51.211716813 +0000 UTC m=+0.104826926 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64)
Nov 23 03:34:51 localhost podman[88557]: 2025-11-23 08:34:51.271458211 +0000 UTC m=+0.161487492 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, distribution-scope=public)
Nov 23 03:34:51 localhost podman[88570]: 2025-11-23 08:34:51.305013309 +0000 UTC m=+0.188913495 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Nov 23 03:34:51 localhost podman[88557]: 2025-11-23 08:34:51.351327749 +0000 UTC m=+0.241357040 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:34:51 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:34:51 localhost podman[88563]: 2025-11-23 08:34:51.359681333 +0000 UTC m=+0.245969793 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git)
Nov 23 03:34:51 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:34:51 localhost podman[88563]: 2025-11-23 08:34:51.371246702 +0000 UTC m=+0.257535172 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, release=1761123044)
Nov 23 03:34:51 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:34:51 localhost podman[88555]: 2025-11-23 08:34:51.43399174 +0000 UTC m=+0.334420329 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-cron-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Nov 23 03:34:51 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:34:51 localhost podman[88556]: 2025-11-23 08:34:51.451608691 +0000 UTC m=+0.344718764 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044)
Nov 23 03:34:51 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:34:51 localhost podman[88564]: 2025-11-23 08:34:51.43735167 +0000 UTC m=+0.320075365 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, container_name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044)
Nov 23 03:34:51 localhost podman[88564]: 2025-11-23 08:34:51.5199281 +0000 UTC m=+0.402651695 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, release=1761123044, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Nov 23 03:34:51 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:34:52 localhost systemd[1]: tmp-crun.LM8zlh.mount: Deactivated successfully.
Nov 23 03:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:34:55 localhost podman[88686]: 2025-11-23 08:34:55.182373979 +0000 UTC m=+0.084057720 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, container_name=nova_migration_target)
Nov 23 03:34:55 localhost podman[88686]: 2025-11-23 08:34:55.6271404 +0000 UTC m=+0.528824171 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, batch=17.1_20251118.1)
Nov 23 03:34:55 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:34:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:34:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:34:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:34:57 localhost podman[88709]: 2025-11-23 08:34:57.186810374 +0000 UTC m=+0.089311631 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044)
Nov 23 03:34:57 localhost podman[88709]: 2025-11-23 08:34:57.239986127 +0000 UTC m=+0.142487444 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 
17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true)
Nov 23 03:34:57 localhost systemd[1]: tmp-crun.zw5BsH.mount: Deactivated successfully.
Nov 23 03:34:57 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:34:57 localhost podman[88710]: 2025-11-23 08:34:57.265052348 +0000 UTC m=+0.163970059 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, container_name=ovn_controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Nov 23 03:34:57 localhost podman[88710]: 2025-11-23 08:34:57.298999016 +0000 UTC m=+0.197916757 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container)
Nov 23 03:34:57 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:34:57 localhost podman[88711]: 2025-11-23 08:34:57.31521425 +0000 UTC m=+0.209291191 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public)
Nov 23 03:34:57 localhost podman[88711]: 2025-11-23 08:34:57.5159162 +0000 UTC m=+0.409993171 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044)
Nov 23 03:34:57 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:35:16 localhost sshd[88807]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:35:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:35:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:35:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:35:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:35:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:35:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:35:22 localhost systemd[1]: tmp-crun.c9x9G3.mount: Deactivated successfully.
Nov 23 03:35:22 localhost podman[88810]: 2025-11-23 08:35:22.195388153 +0000 UTC m=+0.097770167 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, distribution-scope=public, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team)
Nov 23 03:35:22 localhost podman[88814]: 2025-11-23 08:35:22.260899726 +0000 UTC m=+0.154055614 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true)
Nov 23 03:35:22 localhost podman[88810]: 2025-11-23 08:35:22.28310047 +0000 UTC m=+0.185482524 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, vcs-type=git)
Nov 23 03:35:22 localhost podman[88814]: 2025-11-23 08:35:22.289941703 +0000 UTC m=+0.183097651 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:35:22 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:35:22 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:35:22 localhost podman[88811]: 2025-11-23 08:35:22.353519054 +0000 UTC m=+0.253386621 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Nov 23 03:35:22 localhost podman[88812]: 2025-11-23 08:35:22.361381964 +0000 UTC m=+0.255436335 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:35:22 localhost podman[88812]: 2025-11-23 08:35:22.375953904 +0000 UTC m=+0.270008335 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:35:22 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:35:22 localhost podman[88809]: 2025-11-23 08:35:22.417389213 +0000 UTC m=+0.318447111 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public)
Nov 23 03:35:22 localhost podman[88813]: 2025-11-23 08:35:22.457697822 +0000 UTC m=+0.352545654 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, batch=17.1_20251118.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:35:22 localhost podman[88809]: 2025-11-23 08:35:22.454981949 +0000 UTC m=+0.356039837 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, 
container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 23 03:35:22 localhost podman[88813]: 2025-11-23 08:35:22.469920539 +0000 UTC m=+0.364768351 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, container_name=iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:35:22 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:35:22 localhost podman[88811]: 2025-11-23 08:35:22.483185944 +0000 UTC m=+0.383053561 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team)
Nov 23 03:35:22 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:35:22 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:35:23 localhost systemd[1]: tmp-crun.6bXUXA.mount: Deactivated successfully.
Nov 23 03:35:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:35:26 localhost systemd[1]: tmp-crun.aWkPY6.mount: Deactivated successfully.
Nov 23 03:35:26 localhost podman[88946]: 2025-11-23 08:35:26.198377735 +0000 UTC m=+0.099565906 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, architecture=x86_64, container_name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044)
Nov 23 03:35:26 localhost podman[88946]: 2025-11-23 08:35:26.587910398 +0000 UTC m=+0.489098489 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:35:26 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:35:28 localhost systemd[1]: tmp-crun.0Q3Amj.mount: Deactivated successfully.
Nov 23 03:35:28 localhost podman[88971]: 2025-11-23 08:35:28.187969383 +0000 UTC m=+0.096027701 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, version=17.1.12, tcib_managed=true, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public)
Nov 23 03:35:28 localhost podman[88972]: 2025-11-23 08:35:28.233403168 +0000 UTC m=+0.137105210 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:35:28 localhost podman[88971]: 2025-11-23 08:35:28.263969386 +0000 UTC m=+0.172027654 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Nov 23 03:35:28 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:35:28 localhost podman[88972]: 2025-11-23 08:35:28.286079378 +0000 UTC m=+0.189781410 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, version=17.1.12)
Nov 23 03:35:28 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:35:28 localhost podman[88973]: 2025-11-23 08:35:28.343620248 +0000 UTC m=+0.243729313 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:49:46Z)
Nov 23 03:35:28 localhost podman[88973]: 2025-11-23 08:35:28.53990719 +0000 UTC m=+0.440016245 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:35:28 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:35:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:35:42 localhost recover_tripleo_nova_virtqemud[89066]: 61733
Nov 23 03:35:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:35:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:35:43 localhost podman[89152]: 2025-11-23 08:35:43.721689352 +0000 UTC m=+0.101326212 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.buildah.version=1.33.12, version=7, io.openshift.expose-services=, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Nov 23 03:35:43 localhost podman[89152]: 2025-11-23 08:35:43.85613584 +0000 UTC m=+0.235772660 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, version=7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main)
Nov 23 03:35:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:35:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:35:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:35:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:35:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:35:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:35:53 localhost podman[89295]: 2025-11-23 08:35:53.179790942 +0000 UTC m=+0.084719177 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., 
build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public)
Nov 23 03:35:53 localhost systemd[1]: tmp-crun.9iH1YO.mount: Deactivated successfully.
Nov 23 03:35:53 localhost podman[89295]: 2025-11-23 08:35:53.248500251 +0000 UTC m=+0.153428566 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-type=git, version=17.1.12, 
io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1)
Nov 23 03:35:53 localhost podman[89298]: 2025-11-23 08:35:53.208029538 +0000 UTC m=+0.103067858 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=)
Nov 23 03:35:53 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:35:53 localhost podman[89297]: 2025-11-23 08:35:53.23089099 +0000 UTC m=+0.130336929 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.component=openstack-iscsid-container, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, summary=Red Hat 
OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:35:53 localhost podman[89298]: 2025-11-23 08:35:53.287763102 +0000 UTC m=+0.182801452 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:35:53 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:35:53 localhost podman[89296]: 2025-11-23 08:35:53.251987414 +0000 UTC m=+0.147962550 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, 
build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:35:53 localhost podman[89296]: 2025-11-23 08:35:53.335230732 +0000 UTC m=+0.231205878 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:35:53 localhost podman[89293]: 2025-11-23 08:35:53.345357732 +0000 UTC m=+0.249713962 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-18T22:49:32Z, release=1761123044, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com)
Nov 23 03:35:53 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:35:53 localhost podman[89293]: 2025-11-23 08:35:53.384058768 +0000 UTC m=+0.288415008 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, 
distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:35:53 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:35:53 localhost podman[89294]: 2025-11-23 08:35:53.397503338 +0000 UTC m=+0.302511335 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_compute, batch=17.1_20251118.1)
Nov 23 03:35:53 localhost podman[89297]: 2025-11-23 08:35:53.415420297 +0000 UTC m=+0.314866276 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, container_name=iscsid, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044)
Nov 23 03:35:53 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:35:53 localhost podman[89294]: 2025-11-23 08:35:53.447460995 +0000 UTC m=+0.352468982 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, container_name=nova_compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4)
Nov 23 03:35:53 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:35:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:35:57 localhost podman[89430]: 2025-11-23 08:35:57.178065631 +0000 UTC m=+0.083944507 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=nova_migration_target)
Nov 23 03:35:57 localhost podman[89430]: 2025-11-23 08:35:57.571166679 +0000 UTC m=+0.477045565 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_migration_target, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute)
Nov 23 03:35:57 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:35:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:35:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:35:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:35:59 localhost systemd[1]: tmp-crun.YwscJf.mount: Deactivated successfully.
Nov 23 03:35:59 localhost podman[89455]: 2025-11-23 08:35:59.180028759 +0000 UTC m=+0.082375225 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, 
maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4)
Nov 23 03:35:59 localhost podman[89455]: 2025-11-23 08:35:59.198253237 +0000 UTC m=+0.100599753 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:35:59 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:35:59 localhost podman[89454]: 2025-11-23 08:35:59.296020163 +0000 UTC m=+0.203381923 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com)
Nov 23 03:35:59 localhost podman[89456]: 2025-11-23 08:35:59.25367457 +0000 UTC m=+0.148784863 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step1, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc.)
Nov 23 03:35:59 localhost podman[89454]: 2025-11-23 08:35:59.34001877 +0000 UTC m=+0.247380530 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:35:59 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:35:59 localhost podman[89456]: 2025-11-23 08:35:59.487937958 +0000 UTC m=+0.383048231 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4)
Nov 23 03:35:59 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:36:00 localhost systemd[1]: tmp-crun.rVLauv.mount: Deactivated successfully.
Nov 23 03:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:36:24 localhost podman[89557]: 2025-11-23 08:36:24.19316865 +0000 UTC m=+0.086600779 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Nov 23 03:36:24 localhost podman[89557]: 2025-11-23 08:36:24.20588438 +0000 UTC m=+0.099316529 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, tcib_managed=true)
Nov 23 03:36:24 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:36:24 localhost podman[89555]: 2025-11-23 08:36:24.242484149 +0000 UTC m=+0.137546391 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step5, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute)
Nov 23 03:36:24 localhost podman[89555]: 2025-11-23 08:36:24.298055727 +0000 UTC m=+0.193118009 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step5, tcib_managed=true, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:36:24 localhost podman[89554]: 2025-11-23 08:36:24.310853989 +0000 UTC m=+0.210233227 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 
17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:36:24 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:36:24 localhost podman[89554]: 2025-11-23 08:36:24.347887149 +0000 UTC m=+0.247266317 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, 
name=rhosp17/openstack-cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Nov 23 03:36:24 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:36:24 localhost podman[89564]: 2025-11-23 08:36:24.360833646 +0000 UTC m=+0.242628744 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ceilometer_agent_compute, 
io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:36:24 localhost podman[89556]: 2025-11-23 08:36:24.347576031 +0000 UTC m=+0.241576605 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:36:24 localhost podman[89564]: 2025-11-23 08:36:24.41217682 +0000 UTC m=+0.293971938 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:36:24 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:36:24 localhost podman[89556]: 2025-11-23 08:36:24.432032381 +0000 UTC m=+0.326032975 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:36:24 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:36:24 localhost podman[89563]: 2025-11-23 08:36:24.400340373 +0000 UTC m=+0.286471046 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:36:24 localhost podman[89563]: 2025-11-23 08:36:24.486254172 +0000 UTC m=+0.372384895 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64)
Nov 23 03:36:24 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:36:25 localhost systemd[1]: tmp-crun.tKJbro.mount: Deactivated successfully.
Nov 23 03:36:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:36:28 localhost podman[89691]: 2025-11-23 08:36:28.184853909 +0000 UTC m=+0.083901036 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:36:28 localhost podman[89691]: 2025-11-23 08:36:28.532600914 +0000 UTC m=+0.431648051 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, version=17.1.12, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64)
Nov 23 03:36:28 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:36:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:36:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:36:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:36:30 localhost podman[89716]: 2025-11-23 08:36:30.195050099 +0000 UTC m=+0.094093959 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 03:36:30 localhost systemd[1]: tmp-crun.Qf8jEK.mount: Deactivated successfully.
Nov 23 03:36:30 localhost podman[89716]: 2025-11-23 08:36:30.248070247 +0000 UTC m=+0.147114117 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., 
version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Nov 23 03:36:30 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:36:30 localhost podman[89717]: 2025-11-23 08:36:30.298030714 +0000 UTC m=+0.193724795 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:36:30 localhost podman[89715]: 2025-11-23 08:36:30.251206922 +0000 UTC m=+0.153071628 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc.)
Nov 23 03:36:30 localhost podman[89715]: 2025-11-23 08:36:30.334224043 +0000 UTC m=+0.236088699 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:36:30 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:36:30 localhost podman[89717]: 2025-11-23 08:36:30.545099685 +0000 UTC m=+0.440793736 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044)
Nov 23 03:36:30 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:36:33 localhost sshd[89793]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:36:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:36:54 localhost recover_tripleo_nova_virtqemud[89873]: 61733
Nov 23 03:36:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:36:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:36:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:36:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:36:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:36:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:36:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:36:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:36:55 localhost systemd[1]: tmp-crun.f5atXQ.mount: Deactivated successfully.
Nov 23 03:36:55 localhost podman[89877]: 2025-11-23 08:36:55.249363611 +0000 UTC m=+0.134354026 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:36:55 localhost podman[89896]: 2025-11-23 08:36:55.27287664 +0000 UTC m=+0.148688670 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=)
Nov 23 03:36:55 localhost podman[89874]: 2025-11-23 08:36:55.318222714 +0000 UTC m=+0.210773581 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z)
Nov 23 03:36:55 localhost podman[89874]: 2025-11-23 08:36:55.329689161 +0000 UTC m=+0.222240018 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Nov 23 03:36:55 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:36:55 localhost podman[89875]: 2025-11-23 08:36:55.369599619 +0000 UTC m=+0.261480728 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:36:55 localhost podman[89877]: 2025-11-23 08:36:55.382584746 +0000 UTC m=+0.267575201 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, 
io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:36:55 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:36:55 localhost podman[89896]: 2025-11-23 08:36:55.402184471 +0000 UTC m=+0.277996531 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:11:48Z, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:36:55 localhost podman[89876]: 2025-11-23 08:36:55.37079074 +0000 UTC m=+0.261530519 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Nov 23 03:36:55 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:36:55 localhost podman[89875]: 2025-11-23 08:36:55.43730226 +0000 UTC m=+0.329183349 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:36:55 localhost podman[89878]: 2025-11-23 08:36:55.435811951 +0000 UTC m=+0.316044258 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vcs-type=git, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3)
Nov 23 03:36:55 localhost podman[89876]: 2025-11-23 08:36:55.454066748 +0000 UTC m=+0.344806517 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1)
Nov 23 03:36:55 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:36:55 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:36:55 localhost podman[89878]: 2025-11-23 08:36:55.519951452 +0000 UTC m=+0.400183749 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container)
Nov 23 03:36:55 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:36:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:36:59 localhost podman[90009]: 2025-11-23 08:36:59.180897661 +0000 UTC m=+0.085253942 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=nova_migration_target, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:36:59 localhost podman[90009]: 2025-11-23 08:36:59.56019396 +0000 UTC m=+0.464550161 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, container_name=nova_migration_target, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public)
Nov 23 03:36:59 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:37:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:37:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:37:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:37:01 localhost systemd[1]: tmp-crun.GjC5Pq.mount: Deactivated successfully.
Nov 23 03:37:01 localhost podman[90032]: 2025-11-23 08:37:01.198069607 +0000 UTC m=+0.101102008 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=)
Nov 23 03:37:01 localhost podman[90032]: 2025-11-23 08:37:01.280193564 +0000 UTC m=+0.183225925 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team)
Nov 23 03:37:01 localhost podman[90033]: 2025-11-23 08:37:01.290126189 +0000 UTC m=+0.188713420 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, 
distribution-scope=public, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:37:01 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:37:01 localhost podman[90034]: 2025-11-23 08:37:01.253617752 +0000 UTC m=+0.148776682 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:37:01 localhost podman[90033]: 2025-11-23 08:37:01.319872175 +0000 UTC m=+0.218459466 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:37:01 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:37:01 localhost podman[90034]: 2025-11-23 08:37:01.430761393 +0000 UTC m=+0.325920343 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12)
Nov 23 03:37:01 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:37:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:37:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:37:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:37:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:37:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:37:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:37:26 localhost systemd[1]: tmp-crun.bBVTT1.mount: Deactivated successfully.
Nov 23 03:37:26 localhost podman[90107]: 2025-11-23 08:37:26.229787992 +0000 UTC m=+0.130420451 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, version=17.1.12)
Nov 23 03:37:26 localhost podman[90105]: 2025-11-23 08:37:26.232019572 +0000 UTC m=+0.134550312 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:37:26 localhost podman[90108]: 2025-11-23 08:37:26.19087388 +0000 UTC m=+0.084896282 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, container_name=collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:37:26 localhost podman[90105]: 2025-11-23 08:37:26.265044815 +0000 UTC m=+0.167575545 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.component=openstack-cron-container)
Nov 23 03:37:26 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:37:26 localhost podman[90106]: 2025-11-23 08:37:26.245735349 +0000 UTC m=+0.145508145 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_id=tripleo_step5, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:37:26 localhost podman[90114]: 2025-11-23 08:37:26.308314493 +0000 UTC m=+0.197118495 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:37:26 localhost podman[90106]: 2025-11-23 08:37:26.325677147 +0000 UTC m=+0.225449963 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc.)
Nov 23 03:37:26 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:37:26 localhost podman[90114]: 2025-11-23 08:37:26.342665002 +0000 UTC m=+0.231468974 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Nov 23 03:37:26 localhost podman[90107]: 2025-11-23 08:37:26.361186928 +0000 UTC m=+0.261819387 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4)
Nov 23 03:37:26 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:37:26 localhost podman[90108]: 2025-11-23 08:37:26.377109114 +0000 UTC m=+0.271131586 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, distribution-scope=public, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:37:26 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:37:26 localhost podman[90124]: 2025-11-23 08:37:26.34370885 +0000 UTC m=+0.232973254 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container)
Nov 23 03:37:26 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:37:26 localhost podman[90124]: 2025-11-23 08:37:26.422757795 +0000 UTC m=+0.312022139 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, config_id=tripleo_step4, container_name=ceilometer_agent_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z)
Nov 23 03:37:26 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:37:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:37:30 localhost systemd[1]: tmp-crun.daLd1m.mount: Deactivated successfully.
Nov 23 03:37:30 localhost podman[90243]: 2025-11-23 08:37:30.195019134 +0000 UTC m=+0.099298749 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:37:30 localhost podman[90243]: 2025-11-23 08:37:30.589331944 +0000 UTC m=+0.493611569 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, distribution-scope=public, container_name=nova_migration_target, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:37:30 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:37:32 localhost podman[90267]: 2025-11-23 08:37:32.175213969 +0000 UTC m=+0.080590147 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, 
io.openshift.expose-services=, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:37:32 localhost podman[90267]: 2025-11-23 08:37:32.220978304 +0000 UTC m=+0.126354482 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:14:25Z, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:37:32 localhost systemd[1]: tmp-crun.1x0Qsy.mount: Deactivated successfully.
Nov 23 03:37:32 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:37:32 localhost podman[90268]: 2025-11-23 08:37:32.249938838 +0000 UTC m=+0.151550076 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_controller, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z)
Nov 23 03:37:32 localhost podman[90269]: 2025-11-23 08:37:32.292176209 +0000 UTC m=+0.191836564 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, 
com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:37:32 localhost podman[90268]: 2025-11-23 08:37:32.301137918 +0000 UTC m=+0.202749106 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, release=1761123044, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Nov 23 03:37:32 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:37:32 localhost podman[90269]: 2025-11-23 08:37:32.501004137 +0000 UTC m=+0.400664542 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:37:32 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:37:49 localhost sshd[90421]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:37:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:37:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:37:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:37:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:37:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:37:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:37:57 localhost systemd[1]: tmp-crun.T0ZHU9.mount: Deactivated successfully.
Nov 23 03:37:57 localhost podman[90431]: 2025-11-23 08:37:57.214750847 +0000 UTC m=+0.102878094 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:37:57 localhost podman[90438]: 2025-11-23 08:37:57.223802339 +0000 UTC m=+0.103497730 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:37:57 localhost podman[90424]: 2025-11-23 08:37:57.266887932 +0000 UTC m=+0.162390886 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:37:57 localhost podman[90431]: 2025-11-23 08:37:57.279178361 +0000 UTC m=+0.167305598 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid)
Nov 23 03:37:57 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:37:57 localhost podman[90424]: 2025-11-23 08:37:57.294889591 +0000 UTC m=+0.190392635 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:37:57 localhost podman[90438]: 2025-11-23 08:37:57.307093598 +0000 UTC m=+0.186789029 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z)
Nov 23 03:37:57 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:37:57 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:37:57 localhost podman[90423]: 2025-11-23 08:37:57.356800308 +0000 UTC m=+0.252372984 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Nov 23 03:37:57 localhost podman[90423]: 2025-11-23 08:37:57.394897237 +0000 UTC m=+0.290469853 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12)
Nov 23 03:37:57 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:37:57 localhost podman[90425]: 2025-11-23 08:37:57.413973908 +0000 UTC m=+0.305756602 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1)
Nov 23 03:37:57 localhost podman[90426]: 2025-11-23 08:37:57.462287911 +0000 UTC m=+0.349793101 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-collectd, container_name=collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:37:57 localhost podman[90425]: 2025-11-23 08:37:57.466032111 +0000 UTC m=+0.357814765 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team)
Nov 23 03:37:57 localhost podman[90426]: 2025-11-23 08:37:57.474017784 +0000 UTC m=+0.361523014 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:37:57 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:37:57 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:37:58 localhost systemd[1]: tmp-crun.rF3SHH.mount: Deactivated successfully.
Nov 23 03:38:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:38:01 localhost podman[90563]: 2025-11-23 08:38:01.178596831 +0000 UTC m=+0.088041907 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, container_name=nova_migration_target, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:38:01 localhost podman[90563]: 2025-11-23 08:38:01.561362282 +0000 UTC m=+0.470807318 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, version=17.1.12, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:38:01 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:38:03 localhost podman[90587]: 2025-11-23 08:38:03.183933299 +0000 UTC m=+0.083106454 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git)
Nov 23 03:38:03 localhost podman[90587]: 2025-11-23 08:38:03.226031146 +0000 UTC m=+0.125204291 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=)
Nov 23 03:38:03 localhost podman[90586]: 2025-11-23 08:38:03.238652664 +0000 UTC m=+0.139762201 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:38:03 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:38:03 localhost podman[90588]: 2025-11-23 08:38:03.299953784 +0000 UTC m=+0.192277266 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z)
Nov 23 03:38:03 localhost podman[90586]: 2025-11-23 08:38:03.30316515 +0000 UTC m=+0.204274707 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_metadata_agent, release=1761123044, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:38:03 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:38:03 localhost podman[90588]: 2025-11-23 08:38:03.532016394 +0000 UTC m=+0.424339916 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:38:03 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:38:04 localhost systemd[1]: tmp-crun.LaVydX.mount: Deactivated successfully.
Nov 23 03:38:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:38:24 localhost recover_tripleo_nova_virtqemud[90664]: 61733
Nov 23 03:38:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:38:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:38:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:38:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:38:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:38:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:38:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:38:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:38:28 localhost systemd[1]: tmp-crun.KpgYTE.mount: Deactivated successfully.
Nov 23 03:38:28 localhost podman[90665]: 2025-11-23 08:38:28.26620478 +0000 UTC m=+0.162181231 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, release=1761123044, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Nov 23 03:38:28 localhost podman[90667]: 2025-11-23 08:38:28.315390966 +0000 UTC m=+0.204644618 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, container_name=ceilometer_agent_ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z)
Nov 23 03:38:28 localhost podman[90681]: 2025-11-23 08:38:28.3719711 +0000 UTC m=+0.251657065 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Nov 23 03:38:28 localhost podman[90665]: 2025-11-23 08:38:28.380370475 +0000 UTC m=+0.276346936 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:38:28 localhost podman[90667]: 2025-11-23 08:38:28.39440954 +0000 UTC m=+0.283663172 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4)
Nov 23 03:38:28 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:38:28 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:38:28 localhost podman[90666]: 2025-11-23 08:38:28.229165309 +0000 UTC m=+0.121447772 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Nov 23 03:38:28 localhost podman[90666]: 2025-11-23 08:38:28.467987309 +0000 UTC m=+0.360269802 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step5, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Nov 23 03:38:28 localhost podman[90674]: 2025-11-23 08:38:28.475292635 +0000 UTC m=+0.357737124 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:38:28 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:38:28 localhost podman[90681]: 2025-11-23 08:38:28.487978584 +0000 UTC m=+0.367664589 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.openshift.expose-services=, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:38:28 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:38:28 localhost podman[90674]: 2025-11-23 08:38:28.514848523 +0000 UTC m=+0.397293012 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.component=openstack-iscsid-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:38:28 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:38:28 localhost podman[90668]: 2025-11-23 08:38:28.578820875 +0000 UTC m=+0.465024754 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3)
Nov 23 03:38:28 localhost podman[90668]: 2025-11-23 08:38:28.588811602 +0000 UTC m=+0.475015511 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:38:28 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:38:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:38:32 localhost podman[90804]: 2025-11-23 08:38:32.18321711 +0000 UTC m=+0.087039820 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, container_name=nova_migration_target, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com)
Nov 23 03:38:32 localhost podman[90804]: 2025-11-23 08:38:32.55733047 +0000 UTC m=+0.461153150 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:38:32 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:38:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:38:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:38:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:38:34 localhost systemd[1]: tmp-crun.5e7y80.mount: Deactivated successfully.
Nov 23 03:38:34 localhost podman[90827]: 2025-11-23 08:38:34.189888074 +0000 UTC m=+0.087272497 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:38:34 localhost podman[90829]: 2025-11-23 08:38:34.242704137 +0000 UTC m=+0.131995513 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, 
distribution-scope=public, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:38:34 localhost podman[90828]: 2025-11-23 08:38:34.311690913 +0000 UTC m=+0.205637663 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 23 03:38:34 localhost podman[90827]: 2025-11-23 08:38:34.322900773 +0000 UTC m=+0.220285236 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:38:34 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:38:34 localhost podman[90828]: 2025-11-23 08:38:34.367950558 +0000 UTC m=+0.261897298 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:38:34 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:38:34 localhost podman[90829]: 2025-11-23 08:38:34.474953741 +0000 UTC m=+0.364245147 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team)
Nov 23 03:38:34 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:38:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:38:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:38:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:38:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:38:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:38:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:38:59 localhost systemd[1]: tmp-crun.wPW6kC.mount: Deactivated successfully.
Nov 23 03:38:59 localhost podman[90980]: 2025-11-23 08:38:59.191873664 +0000 UTC m=+0.093660076 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64)
Nov 23 03:38:59 localhost podman[90988]: 2025-11-23 08:38:59.208092529 +0000 UTC m=+0.093254826 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, distribution-scope=public, container_name=collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:38:59 localhost podman[90980]: 2025-11-23 08:38:59.23429285 +0000 UTC m=+0.136079292 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Nov 23 03:38:59 localhost podman[90988]: 2025-11-23 08:38:59.24513824 +0000 UTC m=+0.130300537 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, tcib_managed=true, container_name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container)
Nov 23 03:38:59 localhost systemd[1]: tmp-crun.hYvFz3.mount: Deactivated successfully.
Nov 23 03:38:59 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:38:59 localhost podman[90987]: 2025-11-23 08:38:59.249657071 +0000 UTC m=+0.137795218 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:38:59 localhost podman[90987]: 2025-11-23 08:38:59.273782637 +0000 UTC m=+0.161920794 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git)
Nov 23 03:38:59 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:38:59 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:38:59 localhost podman[90995]: 2025-11-23 08:38:59.312476182 +0000 UTC m=+0.192191124 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Nov 23 03:38:59 localhost podman[91005]: 2025-11-23 08:38:59.38115961 +0000 UTC m=+0.256711331 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:38:59 localhost podman[90981]: 2025-11-23 08:38:59.437116057 +0000 UTC m=+0.330988587 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_compute, config_id=tripleo_step5, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:38:59 localhost podman[90995]: 2025-11-23 08:38:59.454962854 +0000 UTC m=+0.334677836 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, name=rhosp17/openstack-iscsid, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1)
Nov 23 03:38:59 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:38:59 localhost podman[91005]: 2025-11-23 08:38:59.493880426 +0000 UTC m=+0.369432117 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true)
Nov 23 03:38:59 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:38:59 localhost podman[90981]: 2025-11-23 08:38:59.551386405 +0000 UTC m=+0.445258965 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:38:59 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:39:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:39:03 localhost podman[91116]: 2025-11-23 08:39:03.193922351 +0000 UTC m=+0.090520053 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4)
Nov 23 03:39:03 localhost podman[91116]: 2025-11-23 08:39:03.613017606 +0000 UTC m=+0.509615288 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044)
Nov 23 03:39:03 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:39:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:39:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:39:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:39:05 localhost systemd[1]: tmp-crun.3lKNQW.mount: Deactivated successfully.
Nov 23 03:39:05 localhost podman[91140]: 2025-11-23 08:39:05.262355089 +0000 UTC m=+0.162406117 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:39:05 localhost podman[91141]: 2025-11-23 08:39:05.308165095 +0000 UTC m=+0.203352523 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Nov 23 03:39:05 localhost podman[91140]: 2025-11-23 08:39:05.344410185 +0000 UTC m=+0.244461173 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.openshift.expose-services=)
Nov 23 03:39:05 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:39:05 localhost podman[91139]: 2025-11-23 08:39:05.217622532 +0000 UTC m=+0.119500858 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, release=1761123044, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:39:05 localhost podman[91139]: 2025-11-23 08:39:05.402149349 +0000 UTC m=+0.304027615 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4)
Nov 23 03:39:05 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:39:05 localhost podman[91141]: 2025-11-23 08:39:05.550207571 +0000 UTC m=+0.445395009 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, 
maintainer=OpenStack TripleO Team, config_id=tripleo_step1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1)
Nov 23 03:39:05 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:39:09 localhost sshd[91214]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:39:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:39:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:39:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:39:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:39:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:39:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:39:30 localhost systemd[1]: tmp-crun.rTfw3P.mount: Deactivated successfully.
Nov 23 03:39:30 localhost podman[91218]: 2025-11-23 08:39:30.206338488 +0000 UTC m=+0.083709651 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Nov 23 03:39:30 localhost podman[91220]: 2025-11-23 08:39:30.212774 +0000 UTC m=+0.080009282 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Nov 23 03:39:30 localhost podman[91220]: 2025-11-23 08:39:30.220418415 +0000 UTC m=+0.087653697 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:39:30 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:39:30 localhost podman[91219]: 2025-11-23 08:39:30.262037808 +0000 UTC m=+0.134621753 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, container_name=collectd, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-collectd, version=17.1.12)
Nov 23 03:39:30 localhost podman[91219]: 2025-11-23 08:39:30.269489527 +0000 UTC m=+0.142073462 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, container_name=collectd, release=1761123044, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12)
Nov 23 03:39:30 localhost podman[91218]: 2025-11-23 08:39:30.283571274 +0000 UTC m=+0.160942457 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, 
vcs-type=git, build-date=2025-11-19T00:12:45Z, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:39:30 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:39:30 localhost podman[91217]: 2025-11-23 08:39:30.318655263 +0000 UTC m=+0.197418984 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:39:30 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:39:30 localhost podman[91217]: 2025-11-23 08:39:30.351604774 +0000 UTC m=+0.230368475 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step5, url=https://www.redhat.com, vcs-type=git, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:39:30 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:39:30 localhost podman[91216]: 2025-11-23 08:39:30.427823605 +0000 UTC m=+0.305144067 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, 
container_name=logrotate_crond, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044)
Nov 23 03:39:30 localhost podman[91216]: 2025-11-23 08:39:30.465065621 +0000 UTC m=+0.342386043 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat 
OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Nov 23 03:39:30 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:39:30 localhost podman[91236]: 2025-11-23 08:39:30.480910665 +0000 UTC m=+0.342403844 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:39:30 localhost podman[91236]: 2025-11-23 08:39:30.542993046 +0000 UTC m=+0.404486195 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, tcib_managed=true)
Nov 23 03:39:30 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:39:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:39:34 localhost podman[91350]: 2025-11-23 08:39:34.180868718 +0000 UTC m=+0.086352742 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team)
Nov 23 03:39:34 localhost podman[91350]: 2025-11-23 08:39:34.551417814 +0000 UTC m=+0.456901958 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4)
Nov 23 03:39:34 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:39:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:39:36 localhost systemd[1]: tmp-crun.EfXm8j.mount: Deactivated successfully.
Nov 23 03:39:36 localhost podman[91373]: 2025-11-23 08:39:36.201721553 +0000 UTC m=+0.095304752 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:39:36 localhost podman[91373]: 2025-11-23 08:39:36.254132814 +0000 UTC m=+0.147715973 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:39:36 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:39:36 localhost podman[91374]: 2025-11-23 08:39:36.295715757 +0000 UTC m=+0.185418382 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true)
Nov 23 03:39:36 localhost podman[91375]: 2025-11-23 08:39:36.261789139 +0000 UTC m=+0.148755041 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd)
Nov 23 03:39:36 localhost podman[91374]: 2025-11-23 08:39:36.322843323 +0000 UTC m=+0.212545898 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Nov 23 03:39:36 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Deactivated successfully.
Nov 23 03:39:36 localhost podman[91375]: 2025-11-23 08:39:36.46437332 +0000 UTC m=+0.351339132 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, config_id=tripleo_step1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team)
Nov 23 03:39:36 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:40:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:40:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:40:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:40:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:40:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:40:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:40:01 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:40:01 localhost recover_tripleo_nova_virtqemud[91561]: 61733
Nov 23 03:40:01 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:40:01 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:40:01 localhost podman[91525]: 2025-11-23 08:40:01.200024436 +0000 UTC m=+0.103181132 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step5, url=https://www.redhat.com, version=17.1.12, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:40:01 localhost podman[91527]: 2025-11-23 08:40:01.242205465 +0000 UTC m=+0.140929442 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat 
OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, release=1761123044, batch=17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64)
Nov 23 03:40:01 localhost podman[91525]: 2025-11-23 08:40:01.263922696 +0000 UTC m=+0.167079352 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-type=git, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:40:01 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:40:01 localhost podman[91524]: 2025-11-23 08:40:01.303457794 +0000 UTC m=+0.206646821 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:40:01 localhost podman[91527]: 2025-11-23 08:40:01.331301479 +0000 UTC m=+0.230025476 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, release=1761123044)
Nov 23 03:40:01 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:40:01 localhost podman[91524]: 2025-11-23 08:40:01.342080538 +0000 UTC m=+0.245269645 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Nov 23 03:40:01 localhost podman[91528]: 2025-11-23 08:40:01.349841826 +0000 UTC m=+0.244252667 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Nov 23 03:40:01 localhost podman[91528]: 2025-11-23 08:40:01.358252331 +0000 UTC m=+0.252663152 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:40:01 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:40:01 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:40:01 localhost podman[91529]: 2025-11-23 08:40:01.396082142 +0000 UTC m=+0.291423799 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1761123044, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:40:01 localhost podman[91526]: 2025-11-23 08:40:01.213307432 +0000 UTC m=+0.111321080 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:12:45Z, 
name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git)
Nov 23 03:40:01 localhost podman[91529]: 2025-11-23 08:40:01.436076332 +0000 UTC m=+0.331417989 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, version=17.1.12)
Nov 23 03:40:01 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:40:01 localhost podman[91526]: 2025-11-23 08:40:01.450812247 +0000 UTC m=+0.348825905 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, container_name=ceilometer_agent_ipmi, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:40:01 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:40:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:40:05 localhost podman[91660]: 2025-11-23 08:40:05.17879722 +0000 UTC m=+0.085102168 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, release=1761123044, container_name=nova_migration_target, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:40:05 localhost sshd[91683]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:40:05 localhost podman[91660]: 2025-11-23 08:40:05.548118342 +0000 UTC m=+0.454423330 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1)
Nov 23 03:40:05 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:40:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:40:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:40:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:40:07 localhost podman[91687]: 2025-11-23 08:40:07.187390966 +0000 UTC m=+0.087571005 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public)
Nov 23 03:40:07 localhost podman[91687]: 2025-11-23 08:40:07.218899129 +0000 UTC m=+0.119079148 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 
ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true)
Nov 23 03:40:07 localhost podman[91687]: unhealthy
Nov 23 03:40:07 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:40:07 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:40:07 localhost podman[91686]: 2025-11-23 08:40:07.238797442 +0000 UTC m=+0.141762415 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:40:07 localhost podman[91688]: 2025-11-23 08:40:07.300463712 +0000 UTC m=+0.197125596 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1761123044, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:40:07 localhost podman[91686]: 2025-11-23 08:40:07.322208484 +0000 UTC m=+0.225173507 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:40:07 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:40:07 localhost podman[91688]: 2025-11-23 08:40:07.49810812 +0000 UTC m=+0.394770054 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:40:07 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:40:29 localhost sshd[91767]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:40:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:40:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:40:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:40:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:40:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:40:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:40:32 localhost podman[91769]: 2025-11-23 08:40:32.201079323 +0000 UTC m=+0.098188909 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, release=1761123044, distribution-scope=public, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:40:32 localhost podman[91769]: 2025-11-23 08:40:32.240069295 +0000 UTC m=+0.137178871 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:40:32 localhost systemd[1]: tmp-crun.VXDA1C.mount: Deactivated successfully.
Nov 23 03:40:32 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:40:32 localhost podman[91771]: 2025-11-23 08:40:32.26377764 +0000 UTC m=+0.153974421 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi)
Nov 23 03:40:32 localhost podman[91789]: 2025-11-23 08:40:32.314921488 +0000 UTC m=+0.191046353 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Nov 23 03:40:32 localhost podman[91777]: 2025-11-23 08:40:32.366845758 +0000 UTC m=+0.248267374 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, 
build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, container_name=collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3)
Nov 23 03:40:32 localhost podman[91789]: 2025-11-23 08:40:32.373088305 +0000 UTC m=+0.249213150 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute)
Nov 23 03:40:32 localhost podman[91777]: 2025-11-23 08:40:32.380927215 +0000 UTC m=+0.262348801 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=collectd, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com)
Nov 23 03:40:32 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:40:32 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:40:32 localhost podman[91779]: 2025-11-23 08:40:32.431915819 +0000 UTC m=+0.309619466 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:40:32 localhost podman[91770]: 2025-11-23 08:40:32.472645578 +0000 UTC m=+0.363848806 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Nov 23 03:40:32 localhost podman[91771]: 2025-11-23 08:40:32.496525108 +0000 UTC m=+0.386721829 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi)
Nov 23 03:40:32 localhost podman[91770]: 2025-11-23 08:40:32.508088137 +0000 UTC m=+0.399291345 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:40:32 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:40:32 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:40:32 localhost podman[91779]: 2025-11-23 08:40:32.547023709 +0000 UTC m=+0.424727406 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:40:32 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:40:36 localhost podman[91909]: 2025-11-23 08:40:36.194413975 +0000 UTC m=+0.097695035 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Nov 23 03:40:36 localhost podman[91909]: 2025-11-23 08:40:36.577001163 +0000 UTC m=+0.480282233 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com)
Nov 23 03:40:36 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:40:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:40:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:40:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:40:38 localhost podman[91934]: 2025-11-23 08:40:38.173458539 +0000 UTC m=+0.076027265 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc.)
Nov 23 03:40:38 localhost podman[91932]: 2025-11-23 08:40:38.234278567 +0000 UTC m=+0.142002250 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Nov 23 03:40:38 localhost podman[91933]: 2025-11-23 08:40:38.294687254 +0000 UTC m=+0.197821825 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4)
Nov 23 03:40:38 localhost podman[91932]: 2025-11-23 08:40:38.323205517 +0000 UTC m=+0.230929200 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:40:38 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:40:38 localhost podman[91933]: 2025-11-23 08:40:38.342242986 +0000 UTC m=+0.245377547 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git)
Nov 23 03:40:38 localhost podman[91933]: unhealthy
Nov 23 03:40:38 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:40:38 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:40:38 localhost podman[91934]: 2025-11-23 08:40:38.396388095 +0000 UTC m=+0.298956791 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12)
Nov 23 03:40:38 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:40:39 localhost systemd[1]: tmp-crun.7duXc3.mount: Deactivated successfully.
Nov 23 03:41:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:41:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:41:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:41:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:41:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:41:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:41:03 localhost systemd[1]: tmp-crun.O5MJfP.mount: Deactivated successfully.
Nov 23 03:41:03 localhost podman[92138]: 2025-11-23 08:41:03.256823678 +0000 UTC m=+0.147324672 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1)
Nov 23 03:41:03 localhost podman[92137]: 2025-11-23 08:41:03.311609565 +0000 UTC m=+0.205372656 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, container_name=collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, version=17.1.12, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Nov 23 03:41:03 localhost podman[92138]: 2025-11-23 08:41:03.320322348 +0000 UTC m=+0.210823352 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, architecture=x86_64, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, tcib_managed=true, name=rhosp17/openstack-iscsid, container_name=iscsid, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, version=17.1.12)
Nov 23 03:41:03 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:41:03 localhost podman[92146]: 2025-11-23 08:41:03.230683599 +0000 UTC m=+0.116078976 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:41:03 localhost podman[92146]: 2025-11-23 08:41:03.363992117 +0000 UTC m=+0.249387514 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044)
Nov 23 03:41:03 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:41:03 localhost podman[92134]: 2025-11-23 08:41:03.41532938 +0000 UTC m=+0.314358102 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:41:03 localhost podman[92137]: 2025-11-23 08:41:03.424035973 +0000 UTC m=+0.317799034 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Nov 23 03:41:03 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:41:03 localhost podman[92136]: 2025-11-23 08:41:03.512167942 +0000 UTC m=+0.410237579 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true)
Nov 23 03:41:03 localhost podman[92134]: 2025-11-23 08:41:03.525284762 +0000 UTC m=+0.424313524 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, vendor=Red Hat, Inc., 
com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Nov 23 03:41:03 localhost podman[92135]: 2025-11-23 08:41:03.550040335 +0000 UTC m=+0.448976625 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Nov 23 03:41:03 localhost podman[92136]: 2025-11-23 08:41:03.56668826 +0000 UTC m=+0.464757917 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:12:45Z, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:41:03 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:41:03 localhost podman[92135]: 2025-11-23 08:41:03.585051752 +0000 UTC m=+0.483988002 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:41:03 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:41:03 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:41:04 localhost systemd[1]: tmp-crun.R8AgkQ.mount: Deactivated successfully.
Nov 23 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:41:07 localhost podman[92273]: 2025-11-23 08:41:07.194510973 +0000 UTC m=+0.093794650 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, release=1761123044, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, vcs-type=git)
Nov 23 03:41:07 localhost podman[92273]: 2025-11-23 08:41:07.587660293 +0000 UTC m=+0.486943970 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO 
Team, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:41:07 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:41:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:41:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:41:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:41:09 localhost podman[92297]: 2025-11-23 08:41:09.18183414 +0000 UTC m=+0.089987049 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, container_name=ovn_metadata_agent, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Nov 23 03:41:09 localhost podman[92299]: 2025-11-23 08:41:09.259095518 +0000 UTC m=+0.158145502 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=metrics_qdr, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:41:09 localhost podman[92297]: 2025-11-23 08:41:09.264767689 +0000 UTC m=+0.172920668 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:41:09 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Deactivated successfully.
Nov 23 03:41:09 localhost podman[92298]: 2025-11-23 08:41:09.229612378 +0000 UTC m=+0.134031437 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, release=1761123044, 
container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container)
Nov 23 03:41:09 localhost podman[92298]: 2025-11-23 08:41:09.308950362 +0000 UTC m=+0.213369390 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Nov 23 03:41:09 localhost podman[92298]: unhealthy
Nov 23 03:41:09 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:41:09 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:41:09 localhost podman[92299]: 2025-11-23 08:41:09.451760323 +0000 UTC m=+0.350810257 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, container_name=metrics_qdr, distribution-scope=public, managed_by=tripleo_ansible)
Nov 23 03:41:09 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:41:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:41:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:41:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:41:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:41:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:41:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:41:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:41:34 localhost recover_tripleo_nova_virtqemud[92417]: 61733
Nov 23 03:41:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:41:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:41:34 localhost podman[92376]: 2025-11-23 08:41:34.206772194 +0000 UTC m=+0.097318055 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:41:34 localhost podman[92376]: 2025-11-23 08:41:34.252947109 +0000 UTC m=+0.143492950 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container)
Nov 23 03:41:34 localhost systemd[1]: tmp-crun.d1J7Ai.mount: Deactivated successfully.
Nov 23 03:41:34 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:41:34 localhost podman[92377]: 2025-11-23 08:41:34.30866492 +0000 UTC m=+0.194600179 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Nov 23 03:41:34 localhost podman[92377]: 2025-11-23 08:41:34.33780984 +0000 UTC m=+0.223745139 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:12:45Z, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi)
Nov 23 03:41:34 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:41:34 localhost podman[92375]: 2025-11-23 08:41:34.259600137 +0000 UTC m=+0.156989452 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:41:34 localhost podman[92383]: 2025-11-23 08:41:34.427873439 +0000 UTC m=+0.311043644 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step3, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z)
Nov 23 03:41:34 localhost podman[92395]: 2025-11-23 08:41:34.479916562 +0000 UTC m=+0.353225552 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:41:34 localhost podman[92395]: 2025-11-23 08:41:34.514715503 +0000 UTC m=+0.388024503 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:41:34 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:41:34 localhost podman[92388]: 2025-11-23 08:41:34.533026213 +0000 UTC m=+0.405648915 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, managed_by=tripleo_ansible)
Nov 23 03:41:34 localhost podman[92383]: 2025-11-23 08:41:34.544145361 +0000 UTC m=+0.427315546 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Nov 23 03:41:34 localhost podman[92375]: 2025-11-23 08:41:34.54673351 +0000 UTC m=+0.444122775 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:41:34 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:41:34 localhost podman[92388]: 2025-11-23 08:41:34.574083652 +0000 UTC m=+0.446706384 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:41:34 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:41:34 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:41:38 localhost podman[92513]: 2025-11-23 08:41:38.186278575 +0000 UTC m=+0.090439460 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute)
Nov 23 03:41:38 localhost podman[92513]: 2025-11-23 08:41:38.558020123 +0000 UTC m=+0.462180998 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:41:38 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:41:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:41:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:41:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:41:40 localhost systemd[1]: tmp-crun.cHCP5d.mount: Deactivated successfully.
Nov 23 03:41:40 localhost podman[92536]: 2025-11-23 08:41:40.214494716 +0000 UTC m=+0.111790672 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.)
Nov 23 03:41:40 localhost podman[92536]: 2025-11-23 08:41:40.240940395 +0000 UTC m=+0.138236341 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:41:40 localhost podman[92536]: unhealthy
Nov 23 03:41:40 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:41:40 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:41:40 localhost podman[92537]: 2025-11-23 08:41:40.294389085 +0000 UTC m=+0.185003292 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Nov 23 03:41:40 localhost podman[92537]: 2025-11-23 08:41:40.31440882 +0000 UTC m=+0.205023057 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 
17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:41:40 localhost podman[92537]: unhealthy
Nov 23 03:41:40 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:41:40 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:41:40 localhost podman[92538]: 2025-11-23 08:41:40.245119316 +0000 UTC m=+0.099501954 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=metrics_qdr, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Nov 23 03:41:40 localhost podman[92538]: 2025-11-23 08:41:40.477705389 +0000 UTC m=+0.332088037 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, name=rhosp17/openstack-qdrouterd, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible)
Nov 23 03:41:40 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:41:41 localhost systemd[1]: tmp-crun.z1DU4p.mount: Deactivated successfully.
Nov 23 03:41:48 localhost sshd[92604]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5640 writes, 24K keys, 5640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5640 writes, 724 syncs, 7.79 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:42:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:42:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:42:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:42:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:42:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:42:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:42:05 localhost podman[92687]: 2025-11-23 08:42:05.184247242 +0000 UTC m=+0.078124698 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, container_name=iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Nov 23 03:42:05 localhost podman[92684]: 2025-11-23 08:42:05.205430294 +0000 UTC m=+0.100914083 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64)
Nov 23 03:42:05 localhost podman[92684]: 2025-11-23 08:42:05.234931031 +0000 UTC m=+0.130414800 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, release=1761123044)
Nov 23 03:42:05 localhost systemd[1]: tmp-crun.H7UHQO.mount: Deactivated successfully.
Nov 23 03:42:05 localhost podman[92685]: 2025-11-23 08:42:05.247112909 +0000 UTC m=+0.143854612 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:42:05 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:42:05 localhost podman[92687]: 2025-11-23 08:42:05.336523812 +0000 UTC m=+0.230401318 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12)
Nov 23 03:42:05 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:42:05 localhost podman[92683]: 2025-11-23 08:42:05.351915236 +0000 UTC m=+0.248643448 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:42:05 localhost podman[92685]: 2025-11-23 08:42:05.360161419 +0000 UTC m=+0.256903112 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Nov 23 03:42:05 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:42:05 localhost podman[92686]: 2025-11-23 08:42:05.341110594 +0000 UTC m=+0.233980993 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Nov 23 03:42:05 localhost podman[92686]: 2025-11-23 08:42:05.420406955 +0000 UTC m=+0.313277334 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Nov 23 03:42:05 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:42:05 localhost podman[92683]: 2025-11-23 08:42:05.464863414 +0000 UTC m=+0.361591606 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Nov 23 03:42:05 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:42:05 localhost podman[92688]: 2025-11-23 08:42:05.513363313 +0000 UTC m=+0.403509768 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:42:05 localhost podman[92688]: 2025-11-23 08:42:05.570020251 +0000 UTC m=+0.460166666 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:42:05 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 4929 writes, 22K keys, 4929 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4929 writes, 684 syncs, 7.21 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:42:09 localhost podman[92821]: 2025-11-23 08:42:09.177717818 +0000 UTC m=+0.083263868 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git)
Nov 23 03:42:09 localhost podman[92821]: 2025-11-23 08:42:09.551014019 +0000 UTC m=+0.456560109 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, 
com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Nov 23 03:42:09 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:42:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:42:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:42:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:42:11 localhost systemd[1]: tmp-crun.912Iod.mount: Deactivated successfully.
Nov 23 03:42:11 localhost podman[92841]: 2025-11-23 08:42:11.185967122 +0000 UTC m=+0.093185095 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4)
Nov 23 03:42:11 localhost podman[92841]: 2025-11-23 08:42:11.237245355 +0000 UTC m=+0.144463288 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:42:11 localhost podman[92841]: unhealthy
Nov 23 03:42:11 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:42:11 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:42:11 localhost podman[92843]: 2025-11-23 08:42:11.23782548 +0000 UTC m=+0.136383499 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_id=tripleo_step1, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Nov 23 03:42:11 localhost podman[92842]: 2025-11-23 08:42:11.29077808 +0000 UTC m=+0.192529616 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.buildah.version=1.41.4)
Nov 23 03:42:11 localhost podman[92842]: 2025-11-23 08:42:11.3749402 +0000 UTC m=+0.276691756 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container)
Nov 23 03:42:11 localhost podman[92842]: unhealthy
Nov 23 03:42:11 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:42:11 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:42:11 localhost podman[92843]: 2025-11-23 08:42:11.440036478 +0000 UTC m=+0.338594557 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:42:11 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:42:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:42:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:42:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:42:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:42:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:42:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:42:36 localhost podman[92909]: 2025-11-23 08:42:36.211650539 +0000 UTC m=+0.110516832 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, container_name=logrotate_crond, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com)
Nov 23 03:42:36 localhost podman[92910]: 2025-11-23 08:42:36.26319477 +0000 UTC m=+0.159209386 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_compute)
Nov 23 03:42:36 localhost podman[92909]: 2025-11-23 08:42:36.275829701 +0000 UTC m=+0.174696034 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=logrotate_crond, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:42:36 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:42:36 localhost podman[92910]: 2025-11-23 08:42:36.322954052 +0000 UTC m=+0.218968658 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, config_id=tripleo_step5, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:42:36 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:42:36 localhost podman[92918]: 2025-11-23 08:42:36.367736711 +0000 UTC m=+0.252518244 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, vcs-type=git, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:42:36 localhost podman[92918]: 2025-11-23 08:42:36.406109586 +0000 UTC m=+0.290891119 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:42:36 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:42:36 localhost podman[92912]: 2025-11-23 08:42:36.422864838 +0000 UTC m=+0.308656909 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git)
Nov 23 03:42:36 localhost podman[92911]: 2025-11-23 08:42:36.474760228 +0000 UTC m=+0.364525696 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1)
Nov 23 03:42:36 localhost podman[92911]: 2025-11-23 08:42:36.513335489 +0000 UTC m=+0.403100967 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public)
Nov 23 03:42:36 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:42:36 localhost podman[92925]: 2025-11-23 08:42:36.528355944 +0000 UTC m=+0.409291794 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:42:36 localhost podman[92912]: 2025-11-23 08:42:36.539426383 +0000 UTC m=+0.425218474 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1)
Nov 23 03:42:36 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:42:36 localhost podman[92925]: 2025-11-23 08:42:36.585370463 +0000 UTC m=+0.466306273 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, version=17.1.12)
Nov 23 03:42:36 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:42:40 localhost podman[93040]: 2025-11-23 08:42:40.173338427 +0000 UTC m=+0.081632483 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-type=git)
Nov 23 03:42:40 localhost podman[93040]: 2025-11-23 08:42:40.563993238 +0000 UTC m=+0.472287264 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Nov 23 03:42:40 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:42:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:42:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:42:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:42:42 localhost podman[93062]: 2025-11-23 08:42:42.188388315 +0000 UTC m=+0.090907374 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:42:42 localhost podman[93062]: 2025-11-23 08:42:42.201786246 +0000 UTC m=+0.104305265 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, container_name=ovn_metadata_agent, url=https://www.redhat.com)
Nov 23 03:42:42 localhost podman[93062]: unhealthy
Nov 23 03:42:42 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:42:42 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:42:42 localhost systemd[1]: tmp-crun.MYiTxM.mount: Deactivated successfully.
Nov 23 03:42:42 localhost podman[93063]: 2025-11-23 08:42:42.265652199 +0000 UTC m=+0.163026129 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, container_name=ovn_controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, config_id=tripleo_step4, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:42:42 localhost podman[93064]: 2025-11-23 08:42:42.299005719 +0000 UTC m=+0.191779055 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, release=1761123044, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:42:42 localhost podman[93063]: 2025-11-23 08:42:42.312233306 +0000 UTC m=+0.209607186 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com)
Nov 23 03:42:42 localhost podman[93063]: unhealthy
Nov 23 03:42:42 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:42:42 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:42:42 localhost podman[93064]: 2025-11-23 08:42:42.525819029 +0000 UTC m=+0.418592395 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.openshift.expose-services=, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible)
Nov 23 03:42:42 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:43:04 localhost podman[93263]: 
Nov 23 03:43:04 localhost podman[93263]: 2025-11-23 08:43:04.797990843 +0000 UTC m=+0.086128365 container create 8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mestorf, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=)
Nov 23 03:43:04 localhost systemd[1]: Started libpod-conmon-8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5.scope.
Nov 23 03:43:04 localhost podman[93263]: 2025-11-23 08:43:04.759926305 +0000 UTC m=+0.048063857 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 03:43:04 localhost systemd[1]: Started libcrun container.
Nov 23 03:43:04 localhost podman[93263]: 2025-11-23 08:43:04.880580961 +0000 UTC m=+0.168718483 container init 8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mestorf, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, version=7, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, distribution-scope=public)
Nov 23 03:43:04 localhost podman[93263]: 2025-11-23 08:43:04.891508386 +0000 UTC m=+0.179645908 container start 8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mestorf, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, com.redhat.component=rhceph-container, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 03:43:04 localhost podman[93263]: 2025-11-23 08:43:04.891833394 +0000 UTC m=+0.179970916 container attach 8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mestorf, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, ceph=True, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 03:43:04 localhost cool_mestorf[93278]: 167 167
Nov 23 03:43:04 localhost systemd[1]: libpod-8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5.scope: Deactivated successfully.
Nov 23 03:43:04 localhost podman[93263]: 2025-11-23 08:43:04.898103713 +0000 UTC m=+0.186241235 container died 8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mestorf, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7)
Nov 23 03:43:05 localhost podman[93283]: 2025-11-23 08:43:05.001114973 +0000 UTC m=+0.089951098 container remove 8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mestorf, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True)
Nov 23 03:43:05 localhost systemd[1]: libpod-conmon-8bb07d4a8f8e07abae17fcb2fa2022908707a2a565648c39241ac9b2b98dffe5.scope: Deactivated successfully.
Nov 23 03:43:05 localhost podman[93305]: 
Nov 23 03:43:05 localhost podman[93305]: 2025-11-23 08:43:05.240057789 +0000 UTC m=+0.084956143 container create 0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_agnesi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55)
Nov 23 03:43:05 localhost systemd[1]: Started libpod-conmon-0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a.scope.
Nov 23 03:43:05 localhost systemd[1]: Started libcrun container.
Nov 23 03:43:05 localhost podman[93305]: 2025-11-23 08:43:05.206099043 +0000 UTC m=+0.050997387 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 03:43:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ee4ac2ad88cd656ef333a7e1dbfb616cea2a3897e58ee0acb607d7a02ed11e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 03:43:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ee4ac2ad88cd656ef333a7e1dbfb616cea2a3897e58ee0acb607d7a02ed11e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 03:43:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ee4ac2ad88cd656ef333a7e1dbfb616cea2a3897e58ee0acb607d7a02ed11e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 03:43:05 localhost podman[93305]: 2025-11-23 08:43:05.31790706 +0000 UTC m=+0.162805364 container init 0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_agnesi, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 03:43:05 localhost podman[93305]: 2025-11-23 08:43:05.331291411 +0000 UTC m=+0.176189715 container start 0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_agnesi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, release=553, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 03:43:05 localhost podman[93305]: 2025-11-23 08:43:05.332277977 +0000 UTC m=+0.177176281 container attach 0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_agnesi, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, version=7)
Nov 23 03:43:05 localhost systemd[1]: var-lib-containers-storage-overlay-4e41f1be155276a43adc7275f2d9b1e9175b94703411ffd72df1b252222e0512-merged.mount: Deactivated successfully.
Nov 23 03:43:06 localhost funny_agnesi[93320]: [
Nov 23 03:43:06 localhost funny_agnesi[93320]:    {
Nov 23 03:43:06 localhost funny_agnesi[93320]:        "available": false,
Nov 23 03:43:06 localhost funny_agnesi[93320]:        "ceph_device": false,
Nov 23 03:43:06 localhost funny_agnesi[93320]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 03:43:06 localhost funny_agnesi[93320]:        "lsm_data": {},
Nov 23 03:43:06 localhost funny_agnesi[93320]:        "lvs": [],
Nov 23 03:43:06 localhost funny_agnesi[93320]:        "path": "/dev/sr0",
Nov 23 03:43:06 localhost funny_agnesi[93320]:        "rejected_reasons": [
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "Has a FileSystem",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "Insufficient space (<5GB)"
Nov 23 03:43:06 localhost funny_agnesi[93320]:        ],
Nov 23 03:43:06 localhost funny_agnesi[93320]:        "sys_api": {
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "actuators": null,
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "device_nodes": "sr0",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "human_readable_size": "482.00 KB",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "id_bus": "ata",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "model": "QEMU DVD-ROM",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "nr_requests": "2",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "partitions": {},
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "path": "/dev/sr0",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "removable": "1",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "rev": "2.5+",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "ro": "0",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "rotational": "1",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "sas_address": "",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "sas_device_handle": "",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "scheduler_mode": "mq-deadline",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "sectors": 0,
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "sectorsize": "2048",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "size": 493568.0,
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "support_discard": "0",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "type": "disk",
Nov 23 03:43:06 localhost funny_agnesi[93320]:            "vendor": "QEMU"
Nov 23 03:43:06 localhost funny_agnesi[93320]:        }
Nov 23 03:43:06 localhost funny_agnesi[93320]:    }
Nov 23 03:43:06 localhost funny_agnesi[93320]: ]
Nov 23 03:43:06 localhost systemd[1]: libpod-0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a.scope: Deactivated successfully.
Nov 23 03:43:06 localhost systemd[1]: libpod-0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a.scope: Consumed 1.241s CPU time.
Nov 23 03:43:06 localhost podman[93305]: 2025-11-23 08:43:06.510278331 +0000 UTC m=+1.355176645 container died 0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_agnesi, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, release=553, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph)
Nov 23 03:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:43:06 localhost systemd[1]: tmp-crun.TGG5Z8.mount: Deactivated successfully.
Nov 23 03:43:06 localhost podman[95364]: 2025-11-23 08:43:06.65517714 +0000 UTC m=+0.097451570 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Nov 23 03:43:06 localhost podman[95364]: 2025-11-23 08:43:06.686805384 +0000 UTC m=+0.129079824 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true)
Nov 23 03:43:06 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:43:06 localhost podman[95360]: 2025-11-23 08:43:06.690309668 +0000 UTC m=+0.138094557 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:43:06 localhost podman[95361]: 2025-11-23 08:43:06.749611868 +0000 UTC m=+0.197686084 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc.)
Nov 23 03:43:06 localhost systemd[1]: var-lib-containers-storage-overlay-f0ee4ac2ad88cd656ef333a7e1dbfb616cea2a3897e58ee0acb607d7a02ed11e-merged.mount: Deactivated successfully.
Nov 23 03:43:06 localhost podman[95362]: 2025-11-23 08:43:06.808365003 +0000 UTC m=+0.256348487 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., tcib_managed=true, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12)
Nov 23 03:43:06 localhost podman[95354]: 2025-11-23 08:43:06.833911762 +0000 UTC m=+0.307407004 container remove 0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_agnesi, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-type=git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55)
Nov 23 03:43:06 localhost systemd[1]: libpod-conmon-0605a1fd741372f75452281dfbfc2edaa1b4c243a3200a432aff31293c0edf3a.scope: Deactivated successfully.
Nov 23 03:43:06 localhost podman[95362]: 2025-11-23 08:43:06.84345076 +0000 UTC m=+0.291434234 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, build-date=2025-11-18T23:44:13Z, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true)
Nov 23 03:43:06 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:43:06 localhost podman[95361]: 2025-11-23 08:43:06.882693019 +0000 UTC m=+0.330767285 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4)
Nov 23 03:43:06 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:43:06 localhost podman[95408]: 2025-11-23 08:43:06.846626836 +0000 UTC m=+0.225615509 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, 
architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, name=rhosp17/openstack-collectd)
Nov 23 03:43:06 localhost podman[95360]: 2025-11-23 08:43:06.926258714 +0000 UTC m=+0.374043583 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=)
Nov 23 03:43:06 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:43:06 localhost podman[95409]: 2025-11-23 08:43:06.969105151 +0000 UTC m=+0.345136944 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4)
Nov 23 03:43:06 localhost podman[95408]: 2025-11-23 08:43:06.97835427 +0000 UTC m=+0.357342983 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Nov 23 03:43:06 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:43:07 localhost podman[95409]: 2025-11-23 08:43:07.024115684 +0000 UTC m=+0.400147537 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:43:07 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:43:07 localhost systemd[1]: tmp-crun.GKtIiO.mount: Deactivated successfully.
Nov 23 03:43:08 localhost sshd[95514]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:43:11 localhost podman[95516]: 2025-11-23 08:43:11.193339213 +0000 UTC m=+0.095363814 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:43:11 localhost podman[95516]: 2025-11-23 08:43:11.562146904 +0000 UTC m=+0.464171485 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_migration_target, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:43:11 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:43:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:43:13 localhost podman[95540]: 2025-11-23 08:43:13.184981038 +0000 UTC m=+0.087596184 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:43:13 localhost podman[95540]: 2025-11-23 08:43:13.228929474 +0000 UTC m=+0.131544600 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com)
Nov 23 03:43:13 localhost podman[95540]: unhealthy
Nov 23 03:43:13 localhost systemd[1]: tmp-crun.W5L9ZF.mount: Deactivated successfully.
Nov 23 03:43:13 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:43:13 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:43:13 localhost podman[95541]: 2025-11-23 08:43:13.250897787 +0000 UTC m=+0.152282859 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public)
Nov 23 03:43:13 localhost podman[95541]: 2025-11-23 08:43:13.286170449 +0000 UTC m=+0.187555521 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, version=17.1.12)
Nov 23 03:43:13 localhost podman[95541]: unhealthy
Nov 23 03:43:13 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:43:13 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:43:13 localhost podman[95542]: 2025-11-23 08:43:13.344694797 +0000 UTC m=+0.243326345 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64)
Nov 23 03:43:13 localhost podman[95542]: 2025-11-23 08:43:13.540988953 +0000 UTC m=+0.439620491 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, tcib_managed=true)
Nov 23 03:43:13 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:43:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:43:34 localhost recover_tripleo_nova_virtqemud[95608]: 61733
Nov 23 03:43:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:43:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:43:37 localhost podman[95610]: 2025-11-23 08:43:37.201373061 +0000 UTC m=+0.094766537 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, release=1761123044, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:43:37 localhost systemd[1]: tmp-crun.6xqV2J.mount: Deactivated successfully.
Nov 23 03:43:37 localhost podman[95612]: 2025-11-23 08:43:37.259591752 +0000 UTC m=+0.148437346 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:43:37 localhost podman[95612]: 2025-11-23 08:43:37.271867794 +0000 UTC m=+0.160713338 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:43:37 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:43:37 localhost podman[95613]: 2025-11-23 08:43:37.318731458 +0000 UTC m=+0.204534570 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vendor=Red Hat, Inc.)
Nov 23 03:43:37 localhost podman[95610]: 2025-11-23 08:43:37.336109118 +0000 UTC m=+0.229502574 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1)
Nov 23 03:43:37 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:43:37 localhost podman[95620]: 2025-11-23 08:43:37.407357179 +0000 UTC m=+0.293356706 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, version=17.1.12)
Nov 23 03:43:37 localhost podman[95611]: 2025-11-23 08:43:37.408425888 +0000 UTC m=+0.300135869 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, vcs-type=git, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible)
Nov 23 03:43:37 localhost podman[95609]: 2025-11-23 08:43:37.462122278 +0000 UTC m=+0.358542875 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git)
Nov 23 03:43:37 localhost podman[95613]: 2025-11-23 08:43:37.480437981 +0000 UTC m=+0.366241093 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, name=rhosp17/openstack-iscsid, 
container_name=iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:43:37 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:43:37 localhost podman[95609]: 2025-11-23 08:43:37.497341217 +0000 UTC m=+0.393761814 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vcs-type=git, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4)
Nov 23 03:43:37 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:43:37 localhost podman[95620]: 2025-11-23 08:43:37.536474303 +0000 UTC m=+0.422473830 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:43:37 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:43:37 localhost podman[95611]: 2025-11-23 08:43:37.591923099 +0000 UTC m=+0.483633030 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com)
Nov 23 03:43:37 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:43:42 localhost podman[95747]: 2025-11-23 08:43:42.171840768 +0000 UTC m=+0.078514059 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute)
Nov 23 03:43:42 localhost podman[95747]: 2025-11-23 08:43:42.571494461 +0000 UTC m=+0.478167752 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 
nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target)
Nov 23 03:43:42 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:43:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:43:44 localhost podman[95772]: 2025-11-23 08:43:44.171047457 +0000 UTC m=+0.079907816 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:43:44 localhost podman[95772]: 2025-11-23 08:43:44.21006858 +0000 UTC m=+0.118928909 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, version=17.1.12, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller)
Nov 23 03:43:44 localhost podman[95772]: unhealthy
Nov 23 03:43:44 localhost podman[95771]: 2025-11-23 08:43:44.222047564 +0000 UTC m=+0.131084138 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:43:44 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:43:44 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:43:44 localhost podman[95771]: 2025-11-23 08:43:44.241941081 +0000 UTC m=+0.150977665 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64)
Nov 23 03:43:44 localhost podman[95771]: unhealthy
Nov 23 03:43:44 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:43:44 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:43:44 localhost systemd[1]: tmp-crun.ZutNot.mount: Deactivated successfully.
Nov 23 03:43:44 localhost podman[95773]: 2025-11-23 08:43:44.338521646 +0000 UTC m=+0.245318099 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044)
Nov 23 03:43:44 localhost podman[95773]: 2025-11-23 08:43:44.581955604 +0000 UTC m=+0.488752017 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-qdrouterd)
Nov 23 03:43:44 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:44:07 localhost systemd[1]: tmp-crun.WklAIk.mount: Deactivated successfully.
Nov 23 03:44:07 localhost podman[95855]: 2025-11-23 08:44:07.70471674 +0000 UTC m=+0.100248536 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd)
Nov 23 03:44:07 localhost podman[95855]: 2025-11-23 08:44:07.709446228 +0000 UTC m=+0.104978024 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:44:07 localhost systemd[1]: tmp-crun.67UMvt.mount: Deactivated successfully.
Nov 23 03:44:07 localhost podman[95885]: 2025-11-23 08:44:07.716782345 +0000 UTC m=+0.093758580 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:44:07 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:44:07 localhost podman[95885]: 2025-11-23 08:44:07.746829786 +0000 UTC m=+0.123806071 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_ipmi)
Nov 23 03:44:07 localhost podman[95857]: 2025-11-23 08:44:07.758266735 +0000 UTC m=+0.143271167 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044)
Nov 23 03:44:07 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:44:07 localhost podman[95854]: 2025-11-23 08:44:07.806715783 +0000 UTC m=+0.203297827 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step5, container_name=nova_compute, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:44:07 localhost podman[95853]: 2025-11-23 08:44:07.848305314 +0000 UTC m=+0.246285896 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, 
release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible)
Nov 23 03:44:07 localhost podman[95854]: 2025-11-23 08:44:07.856921146 +0000 UTC m=+0.253503220 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, container_name=nova_compute)
Nov 23 03:44:07 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:44:07 localhost podman[95857]: 2025-11-23 08:44:07.878095948 +0000 UTC m=+0.263100400 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc.)
Nov 23 03:44:07 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:44:07 localhost podman[95853]: 2025-11-23 08:44:07.908263912 +0000 UTC m=+0.306244524 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container)
Nov 23 03:44:07 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:44:07 localhost podman[95873]: 2025-11-23 08:44:07.825845148 +0000 UTC m=+0.206619485 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Nov 23 03:44:07 localhost podman[95873]: 2025-11-23 08:44:07.955890107 +0000 UTC m=+0.336664474 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, distribution-scope=public, vcs-type=git)
Nov 23 03:44:07 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:44:13 localhost podman[96101]: 2025-11-23 08:44:13.186678296 +0000 UTC m=+0.090808710 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, 
architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, build-date=2025-11-19T00:36:58Z)
Nov 23 03:44:13 localhost podman[96101]: 2025-11-23 08:44:13.565973859 +0000 UTC m=+0.470104253 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:44:13 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:44:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:44:15 localhost systemd[1]: tmp-crun.xkJJaK.mount: Deactivated successfully.
Nov 23 03:44:15 localhost podman[96124]: 2025-11-23 08:44:15.191341494 +0000 UTC m=+0.096865385 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=)
Nov 23 03:44:15 localhost podman[96124]: 2025-11-23 08:44:15.230990053 +0000 UTC m=+0.136513964 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:44:15 localhost podman[96124]: unhealthy
Nov 23 03:44:15 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:44:15 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:44:15 localhost podman[96125]: 2025-11-23 08:44:15.246617535 +0000 UTC m=+0.149263498 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, config_id=tripleo_step4, release=1761123044, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.)
Nov 23 03:44:15 localhost podman[96126]: 2025-11-23 08:44:15.30020654 +0000 UTC m=+0.199334549 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, 
build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:44:15 localhost podman[96125]: 2025-11-23 08:44:15.31202139 +0000 UTC m=+0.214667343 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, 
release=1761123044, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_id=tripleo_step4)
Nov 23 03:44:15 localhost podman[96125]: unhealthy
Nov 23 03:44:15 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:44:15 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:44:15 localhost podman[96126]: 2025-11-23 08:44:15.516217169 +0000 UTC m=+0.415345158 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, version=17.1.12, config_id=tripleo_step1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:44:15 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:44:16 localhost systemd[1]: tmp-crun.rME036.mount: Deactivated successfully.
Nov 23 03:44:28 localhost sshd[96192]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:44:38 localhost podman[96195]: 2025-11-23 08:44:38.195714106 +0000 UTC m=+0.096601567 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Nov 23 03:44:38 localhost systemd[1]: tmp-crun.Cf0j3s.mount: Deactivated successfully.
Nov 23 03:44:38 localhost podman[96210]: 2025-11-23 08:44:38.230992418 +0000 UTC m=+0.116284988 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Nov 23 03:44:38 localhost podman[96210]: 2025-11-23 08:44:38.280835593 +0000 UTC m=+0.166128163 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, vcs-type=git, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:44:38 localhost podman[96202]: 2025-11-23 08:44:38.289460776 +0000 UTC m=+0.180848951 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:44:38 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:44:38 localhost podman[96196]: 2025-11-23 08:44:38.32188514 +0000 UTC m=+0.217843517 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Nov 23 03:44:38 localhost podman[96202]: 2025-11-23 08:44:38.329801674 +0000 UTC m=+0.221189839 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 
17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4)
Nov 23 03:44:38 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:44:38 localhost podman[96196]: 2025-11-23 08:44:38.349767863 +0000 UTC m=+0.245726230 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Nov 23 03:44:38 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:44:38 localhost podman[96197]: 2025-11-23 08:44:38.323453473 +0000 UTC m=+0.217404837 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=1761123044, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Nov 23 03:44:38 localhost podman[96195]: 2025-11-23 08:44:38.382479706 +0000 UTC m=+0.283367167 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step5, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:44:38 localhost podman[96197]: 2025-11-23 08:44:38.407919252 +0000 UTC m=+0.301870616 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-collectd)
Nov 23 03:44:38 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:44:38 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:44:38 localhost podman[96194]: 2025-11-23 08:44:38.470580692 +0000 UTC m=+0.373660722 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-cron)
Nov 23 03:44:38 localhost podman[96194]: 2025-11-23 08:44:38.508095045 +0000 UTC m=+0.411175045 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, config_id=tripleo_step4, tcib_managed=true, container_name=logrotate_crond, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Nov 23 03:44:38 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:44:44 localhost systemd[1]: tmp-crun.GByt1e.mount: Deactivated successfully.
Nov 23 03:44:44 localhost podman[96330]: 2025-11-23 08:44:44.181673352 +0000 UTC m=+0.089114046 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:44:44 localhost podman[96330]: 2025-11-23 08:44:44.559150766 +0000 UTC m=+0.466591450 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:44:44 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:44:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:44:46 localhost podman[96353]: 2025-11-23 08:44:46.18194298 +0000 UTC m=+0.085787105 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:44:46 localhost podman[96353]: 2025-11-23 08:44:46.228106476 +0000 UTC m=+0.131950631 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:44:46 localhost podman[96353]: unhealthy
Nov 23 03:44:46 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:44:46 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:44:46 localhost podman[96352]: 2025-11-23 08:44:46.22863232 +0000 UTC m=+0.135301511 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:44:46 localhost podman[96354]: 2025-11-23 08:44:46.296225304 +0000 UTC m=+0.196158954 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:44:46 localhost podman[96352]: 2025-11-23 08:44:46.312879643 +0000 UTC m=+0.219548804 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Nov 23 03:44:46 localhost podman[96352]: unhealthy
Nov 23 03:44:46 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:44:46 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:44:46 localhost podman[96354]: 2025-11-23 08:44:46.525247153 +0000 UTC m=+0.425180803 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team)
Nov 23 03:44:46 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:45:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:45:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:45:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:45:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:45:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:45:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:45:09 localhost systemd[1]: tmp-crun.b01u0X.mount: Deactivated successfully.
Nov 23 03:45:09 localhost podman[96424]: 2025-11-23 08:45:09.204777109 +0000 UTC m=+0.099010483 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:45:09 localhost podman[96427]: 2025-11-23 08:45:09.263597666 +0000 UTC m=+0.149681500 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:45:09 localhost podman[96427]: 2025-11-23 08:45:09.273936415 +0000 UTC m=+0.160020229 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Nov 23 03:45:09 localhost podman[96425]: 2025-11-23 08:45:09.305524118 +0000 UTC m=+0.195747543 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4)
Nov 23 03:45:09 localhost podman[96426]: 2025-11-23 08:45:09.276012461 +0000 UTC m=+0.161677054 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:45:09 localhost podman[96423]: 2025-11-23 08:45:09.357427658 +0000 UTC m=+0.253037078 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, release=1761123044, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, vcs-type=git, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:45:09 localhost podman[96426]: 2025-11-23 08:45:09.360006827 +0000 UTC m=+0.245671460 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public)
Nov 23 03:45:09 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:45:09 localhost podman[96423]: 2025-11-23 08:45:09.370878031 +0000 UTC m=+0.266487481 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, container_name=logrotate_crond, tcib_managed=true, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:45:09 localhost podman[96425]: 2025-11-23 08:45:09.386371128 +0000 UTC m=+0.276594503 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team)
Nov 23 03:45:09 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:45:09 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:45:09 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:45:09 localhost podman[96437]: 2025-11-23 08:45:09.231755187 +0000 UTC m=+0.112998460 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:45:09 localhost podman[96424]: 2025-11-23 08:45:09.441190638 +0000 UTC m=+0.335424012 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:36:58Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Nov 23 03:45:09 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:45:09 localhost podman[96437]: 2025-11-23 08:45:09.465895814 +0000 UTC m=+0.347139117 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1761123044, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:45:09 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:45:10 localhost systemd[1]: tmp-crun.Uotgsi.mount: Deactivated successfully.
Nov 23 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:45:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:45:15 localhost recover_tripleo_nova_virtqemud[96638]: 61733
Nov 23 03:45:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:45:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:45:15 localhost podman[96636]: 2025-11-23 08:45:15.191412501 +0000 UTC m=+0.096233538 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_migration_target, distribution-scope=public)
Nov 23 03:45:15 localhost podman[96636]: 2025-11-23 08:45:15.576251634 +0000 UTC m=+0.481072661 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, container_name=nova_migration_target, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:45:15 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:45:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:45:17 localhost podman[96662]: 2025-11-23 08:45:17.247659819 +0000 UTC m=+0.083775641 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, name=rhosp17/openstack-ovn-controller, 
batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:45:17 localhost podman[96661]: 2025-11-23 08:45:17.299366934 +0000 UTC m=+0.137126751 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Nov 23 03:45:17 localhost podman[96662]: 2025-11-23 08:45:17.315112779 +0000 UTC m=+0.151228641 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:45:17 localhost podman[96662]: unhealthy
Nov 23 03:45:17 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:45:17 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:45:17 localhost podman[96661]: 2025-11-23 08:45:17.339907498 +0000 UTC m=+0.177667305 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com)
Nov 23 03:45:17 localhost podman[96661]: unhealthy
Nov 23 03:45:17 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:45:17 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:45:17 localhost podman[96663]: 2025-11-23 08:45:17.400935774 +0000 UTC m=+0.232877334 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:45:17 localhost podman[96663]: 2025-11-23 08:45:17.605892894 +0000 UTC m=+0.437834434 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:45:17 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:45:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:45:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:45:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:45:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:45:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:45:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:45:40 localhost podman[96732]: 2025-11-23 08:45:40.2052185 +0000 UTC m=+0.101651205 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:45:40 localhost systemd[1]: tmp-crun.nHIGHj.mount: Deactivated successfully.
Nov 23 03:45:40 localhost podman[96746]: 2025-11-23 08:45:40.266102872 +0000 UTC m=+0.154691065 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute)
Nov 23 03:45:40 localhost systemd[1]: tmp-crun.fNH6Ou.mount: Deactivated successfully.
Nov 23 03:45:40 localhost podman[96733]: 2025-11-23 08:45:40.313072979 +0000 UTC m=+0.205049743 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:45:40 localhost podman[96746]: 2025-11-23 08:45:40.325082733 +0000 UTC m=+0.213670966 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible)
Nov 23 03:45:40 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:45:40 localhost podman[96733]: 2025-11-23 08:45:40.350351485 +0000 UTC m=+0.242328269 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat 
OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z)
Nov 23 03:45:40 localhost podman[96732]: 2025-11-23 08:45:40.370013515 +0000 UTC m=+0.266446240 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, 
version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:45:40 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:45:40 localhost podman[96731]: 2025-11-23 08:45:40.376872861 +0000 UTC m=+0.277246612 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, config_id=tripleo_step5, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container)
Nov 23 03:45:40 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:45:40 localhost podman[96740]: 2025-11-23 08:45:40.465240834 +0000 UTC m=+0.354189747 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, tcib_managed=true, container_name=iscsid, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12)
Nov 23 03:45:40 localhost podman[96730]: 2025-11-23 08:45:40.429957223 +0000 UTC m=+0.333650053 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.buildah.version=1.41.4, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:45:40 localhost podman[96731]: 2025-11-23 08:45:40.489746976 +0000 UTC m=+0.390120757 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute)
Nov 23 03:45:40 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:45:40 localhost podman[96740]: 2025-11-23 08:45:40.504744171 +0000 UTC m=+0.393693044 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, container_name=iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid)
Nov 23 03:45:40 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:45:40 localhost podman[96730]: 2025-11-23 08:45:40.5710806 +0000 UTC m=+0.474773480 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git)
Nov 23 03:45:40 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:45:46 localhost systemd[1]: tmp-crun.peWQWn.mount: Deactivated successfully.
Nov 23 03:45:46 localhost podman[96862]: 2025-11-23 08:45:46.180806844 +0000 UTC m=+0.086309400 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1)
Nov 23 03:45:46 localhost podman[96862]: 2025-11-23 08:45:46.564393873 +0000 UTC m=+0.469896439 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.buildah.version=1.41.4, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:45:46 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:45:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:45:48 localhost podman[96888]: 2025-11-23 08:45:48.173852967 +0000 UTC m=+0.071333635 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:45:48 localhost systemd[1]: tmp-crun.N3zty4.mount: Deactivated successfully.
Nov 23 03:45:48 localhost podman[96887]: 2025-11-23 08:45:48.272358894 +0000 UTC m=+0.171417165 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=ovn_controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, 
build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, tcib_managed=true)
Nov 23 03:45:48 localhost podman[96887]: 2025-11-23 08:45:48.291121371 +0000 UTC m=+0.190179592 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 23 03:45:48 localhost podman[96887]: unhealthy
Nov 23 03:45:48 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:45:48 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:45:48 localhost podman[96886]: 2025-11-23 08:45:48.353523394 +0000 UTC m=+0.256292675 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Nov 23 03:45:48 localhost podman[96886]: 2025-11-23 08:45:48.364309975 +0000 UTC m=+0.267079206 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vendor=Red 
Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64)
Nov 23 03:45:48 localhost podman[96886]: unhealthy
Nov 23 03:45:48 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:45:48 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:45:48 localhost podman[96888]: 2025-11-23 08:45:48.41002413 +0000 UTC m=+0.307504868 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., container_name=metrics_qdr)
Nov 23 03:45:48 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:45:53 localhost sshd[96952]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:46:11 localhost podman[96955]: 2025-11-23 08:46:11.194816108 +0000 UTC m=+0.095368064 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git)
Nov 23 03:46:11 localhost podman[96955]: 2025-11-23 08:46:11.255246828 +0000 UTC m=+0.155798754 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5)
Nov 23 03:46:11 localhost systemd[1]: tmp-crun.21Ki2H.mount: Deactivated successfully.
Nov 23 03:46:11 localhost podman[96963]: 2025-11-23 08:46:11.257799647 +0000 UTC m=+0.148975751 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Nov 23 03:46:11 localhost podman[96963]: 2025-11-23 08:46:11.265856984 +0000 UTC m=+0.157033128 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4)
Nov 23 03:46:11 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:46:11 localhost podman[96970]: 2025-11-23 08:46:11.308354141 +0000 UTC m=+0.192228567 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z)
Nov 23 03:46:11 localhost podman[96970]: 2025-11-23 08:46:11.396075418 +0000 UTC m=+0.279949854 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1)
Nov 23 03:46:11 localhost podman[96954]: 2025-11-23 08:46:11.404121165 +0000 UTC m=+0.306222953 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=logrotate_crond, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack 
Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc.)
Nov 23 03:46:11 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:46:11 localhost podman[96956]: 2025-11-23 08:46:11.364342171 +0000 UTC m=+0.260953401 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:46:11 localhost podman[96956]: 2025-11-23 08:46:11.444131495 +0000 UTC m=+0.340742705 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12)
Nov 23 03:46:11 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:46:11 localhost podman[96957]: 2025-11-23 08:46:11.463481117 +0000 UTC m=+0.355292767 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, container_name=collectd, release=1761123044, version=17.1.12, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc.)
Nov 23 03:46:11 localhost podman[96954]: 2025-11-23 08:46:11.467353921 +0000 UTC m=+0.369455679 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible)
Nov 23 03:46:11 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:46:11 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:46:11 localhost podman[96957]: 2025-11-23 08:46:11.497646618 +0000 UTC m=+0.389458218 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vcs-type=git, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3)
Nov 23 03:46:11 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:46:12 localhost podman[97190]: 2025-11-23 08:46:12.314814486 +0000 UTC m=+0.095721343 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, architecture=x86_64, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Nov 23 03:46:12 localhost podman[97190]: 2025-11-23 08:46:12.42093613 +0000 UTC m=+0.201842977 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:46:17 localhost systemd[1]: tmp-crun.cWdJte.mount: Deactivated successfully.
Nov 23 03:46:17 localhost podman[97333]: 2025-11-23 08:46:17.182606301 +0000 UTC m=+0.085886767 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4)
Nov 23 03:46:17 localhost podman[97333]: 2025-11-23 08:46:17.583756575 +0000 UTC m=+0.487037061 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute)
Nov 23 03:46:17 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:46:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:46:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:46:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:46:19 localhost systemd[1]: tmp-crun.2PPjT2.mount: Deactivated successfully.
Nov 23 03:46:19 localhost podman[97358]: 2025-11-23 08:46:19.193089846 +0000 UTC m=+0.090990856 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 23 03:46:19 localhost podman[97358]: 2025-11-23 08:46:19.239017894 +0000 UTC m=+0.136918884 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:46:19 localhost podman[97358]: unhealthy
Nov 23 03:46:19 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:46:19 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:46:19 localhost podman[97357]: 2025-11-23 08:46:19.288718566 +0000 UTC m=+0.188300062 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:46:19 localhost podman[97359]: 2025-11-23 08:46:19.243193258 +0000 UTC m=+0.137954943 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1)
Nov 23 03:46:19 localhost podman[97357]: 2025-11-23 08:46:19.305432347 +0000 UTC m=+0.205013833 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Nov 23 03:46:19 localhost podman[97357]: unhealthy
Nov 23 03:46:19 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:46:19 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:46:19 localhost podman[97359]: 2025-11-23 08:46:19.413685308 +0000 UTC m=+0.308447003 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:46:19 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:46:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:46:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:46:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:46:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:46:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:46:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:46:42 localhost systemd[1]: tmp-crun.JXLijT.mount: Deactivated successfully.
Nov 23 03:46:42 localhost podman[97426]: 2025-11-23 08:46:42.208637097 +0000 UTC m=+0.102690552 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:46:42 localhost systemd[1]: tmp-crun.9KEBCi.mount: Deactivated successfully.
Nov 23 03:46:42 localhost podman[97427]: 2025-11-23 08:46:42.261328269 +0000 UTC m=+0.154639044 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team)
Nov 23 03:46:42 localhost podman[97426]: 2025-11-23 08:46:42.264452693 +0000 UTC m=+0.158506178 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step5)
Nov 23 03:46:42 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:46:42 localhost podman[97427]: 2025-11-23 08:46:42.293862356 +0000 UTC m=+0.187173101 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, release=1761123044, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:46:42 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:46:42 localhost podman[97425]: 2025-11-23 08:46:42.310001082 +0000 UTC m=+0.207062028 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:46:42 localhost podman[97425]: 2025-11-23 08:46:42.320797204 +0000 UTC m=+0.217858150 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp17/openstack-cron)
Nov 23 03:46:42 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:46:42 localhost podman[97440]: 2025-11-23 08:46:42.364133562 +0000 UTC m=+0.246673126 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc.)
Nov 23 03:46:42 localhost podman[97428]: 2025-11-23 08:46:42.412453227 +0000 UTC m=+0.302674988 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:51:28Z, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Nov 23 03:46:42 localhost podman[97440]: 2025-11-23 08:46:42.41813566 +0000 UTC m=+0.300675214 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64)
Nov 23 03:46:42 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:46:42 localhost podman[97429]: 2025-11-23 08:46:42.433434952 +0000 UTC m=+0.316652074 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-iscsid-container, architecture=x86_64, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:46:42 localhost podman[97429]: 2025-11-23 08:46:42.448059407 +0000 UTC m=+0.331276499 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T23:44:13Z, container_name=iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com)
Nov 23 03:46:42 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:46:42 localhost podman[97428]: 2025-11-23 08:46:42.47409976 +0000 UTC m=+0.364321461 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3)
Nov 23 03:46:42 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:46:48 localhost systemd[1]: tmp-crun.98Arjg.mount: Deactivated successfully.
Nov 23 03:46:48 localhost podman[97560]: 2025-11-23 08:46:48.180594485 +0000 UTC m=+0.088549891 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64)
Nov 23 03:46:48 localhost podman[97560]: 2025-11-23 08:46:48.556030884 +0000 UTC m=+0.463986340 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044)
Nov 23 03:46:48 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:46:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:46:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:46:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:46:50 localhost podman[97584]: 2025-11-23 08:46:50.182106576 +0000 UTC m=+0.087013338 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git)
Nov 23 03:46:50 localhost podman[97585]: 2025-11-23 08:46:50.233161413 +0000 UTC m=+0.136367259 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:46:50 localhost podman[97584]: 2025-11-23 08:46:50.252761082 +0000 UTC m=+0.157667854 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Nov 23 03:46:50 localhost podman[97585]: 2025-11-23 08:46:50.278160998 +0000 UTC m=+0.181366804 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc.)
Nov 23 03:46:50 localhost podman[97585]: unhealthy
Nov 23 03:46:50 localhost podman[97586]: 2025-11-23 08:46:50.292861165 +0000 UTC m=+0.192248108 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:46:50 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:46:50 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:46:50 localhost podman[97584]: unhealthy
Nov 23 03:46:50 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:46:50 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:46:50 localhost podman[97586]: 2025-11-23 08:46:50.489180462 +0000 UTC m=+0.388567365 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vcs-type=git, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z)
Nov 23 03:46:50 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:47:04 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:47:04 localhost recover_tripleo_nova_virtqemud[97654]: 61733
Nov 23 03:47:04 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:47:04 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:47:13 localhost podman[97657]: 2025-11-23 08:47:13.20064308 +0000 UTC m=+0.094629814 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:47:13 localhost systemd[1]: tmp-crun.GJ8i0C.mount: Deactivated successfully.
Nov 23 03:47:13 localhost podman[97657]: 2025-11-23 08:47:13.263068924 +0000 UTC m=+0.157055718 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4)
Nov 23 03:47:13 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:47:13 localhost podman[97670]: 2025-11-23 08:47:13.312182279 +0000 UTC m=+0.195165196 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, architecture=x86_64)
Nov 23 03:47:13 localhost podman[97670]: 2025-11-23 08:47:13.348855359 +0000 UTC m=+0.231838216 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:47:13 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:47:13 localhost podman[97656]: 2025-11-23 08:47:13.263806384 +0000 UTC m=+0.160546233 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, version=17.1.12)
Nov 23 03:47:13 localhost podman[97658]: 2025-11-23 08:47:13.368428757 +0000 UTC m=+0.251960629 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z)
Nov 23 03:47:13 localhost podman[97656]: 2025-11-23 08:47:13.400016319 +0000 UTC m=+0.296756168 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute)
Nov 23 03:47:13 localhost podman[97658]: 2025-11-23 08:47:13.409127874 +0000 UTC m=+0.292659756 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:47:13 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:47:13 localhost podman[97655]: 2025-11-23 08:47:13.415331073 +0000 UTC m=+0.315130404 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, container_name=logrotate_crond, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-type=git)
Nov 23 03:47:13 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:47:13 localhost podman[97655]: 2025-11-23 08:47:13.429178395 +0000 UTC m=+0.328977716 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, 
container_name=logrotate_crond, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:47:13 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:47:13 localhost podman[97664]: 2025-11-23 08:47:13.472564836 +0000 UTC m=+0.359602842 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, container_name=iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:47:13 localhost podman[97664]: 2025-11-23 08:47:13.487937401 +0000 UTC m=+0.374975417 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:47:13 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:47:17 localhost sshd[97868]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:47:19 localhost podman[97870]: 2025-11-23 08:47:19.152787043 +0000 UTC m=+0.099158957 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:47:19 localhost podman[97870]: 2025-11-23 08:47:19.53003283 +0000 UTC m=+0.476404744 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Nov 23 03:47:19 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:47:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:47:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:47:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:47:21 localhost podman[97894]: 2025-11-23 08:47:21.186254866 +0000 UTC m=+0.083775701 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 
17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:47:21 localhost podman[97894]: 2025-11-23 08:47:21.202752811 +0000 UTC m=+0.100273686 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.12)
Nov 23 03:47:21 localhost podman[97894]: unhealthy
Nov 23 03:47:21 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:47:21 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:47:21 localhost podman[97893]: 2025-11-23 08:47:21.244734274 +0000 UTC m=+0.142435663 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z)
Nov 23 03:47:21 localhost podman[97895]: 2025-11-23 08:47:21.301978399 +0000 UTC m=+0.195952468 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:47:21 localhost podman[97893]: 2025-11-23 08:47:21.315221746 +0000 UTC m=+0.212923185 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:47:21 localhost podman[97893]: unhealthy
Nov 23 03:47:21 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:47:21 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:47:21 localhost podman[97895]: 2025-11-23 08:47:21.527052041 +0000 UTC m=+0.421026110 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, tcib_managed=true)
Nov 23 03:47:21 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:47:44 localhost podman[97960]: 2025-11-23 08:47:44.200939367 +0000 UTC m=+0.090593945 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:47:44 localhost podman[97962]: 2025-11-23 08:47:44.258404247 +0000 UTC m=+0.142309590 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, 
com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:47:44 localhost podman[97962]: 2025-11-23 08:47:44.267850943 +0000 UTC m=+0.151756286 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public)
Nov 23 03:47:44 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:47:44 localhost podman[97961]: 2025-11-23 08:47:44.315273702 +0000 UTC m=+0.200860420 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:47:44 localhost podman[97966]: 2025-11-23 08:47:44.359147465 +0000 UTC m=+0.239219515 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container)
Nov 23 03:47:44 localhost podman[97966]: 2025-11-23 08:47:44.37190501 +0000 UTC m=+0.251977050 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:47:44 localhost podman[97960]: 2025-11-23 08:47:44.379083564 +0000 UTC m=+0.268738192 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_compute)
Nov 23 03:47:44 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:47:44 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:47:44 localhost podman[97974]: 2025-11-23 08:47:44.464084157 +0000 UTC m=+0.342230454 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Nov 23 03:47:44 localhost podman[97961]: 2025-11-23 08:47:44.483958403 +0000 UTC m=+0.369545131 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, 
build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Nov 23 03:47:44 localhost podman[97974]: 2025-11-23 08:47:44.501065515 +0000 UTC m=+0.379211842 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_compute, release=1761123044, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Nov 23 03:47:44 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:47:44 localhost podman[97959]: 2025-11-23 08:47:44.51941111 +0000 UTC m=+0.411087542 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Nov 23 03:47:44 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:47:44 localhost podman[97959]: 2025-11-23 08:47:44.554965829 +0000 UTC m=+0.446642251 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, release=1761123044, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, 
build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Nov 23 03:47:44 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:47:45 localhost systemd[1]: tmp-crun.x231fN.mount: Deactivated successfully.
Nov 23 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:47:50 localhost podman[98094]: 2025-11-23 08:47:50.179560024 +0000 UTC m=+0.085198759 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Nov 23 03:47:50 localhost podman[98094]: 2025-11-23 08:47:50.582396633 +0000 UTC m=+0.488035378 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:47:50 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:47:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:47:52 localhost podman[98119]: 2025-11-23 08:47:52.166198195 +0000 UTC m=+0.070624907 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Nov 23 03:47:52 localhost systemd[1]: tmp-crun.BZ1nAI.mount: Deactivated successfully.
Nov 23 03:47:52 localhost podman[98117]: 2025-11-23 08:47:52.239193824 +0000 UTC m=+0.146841552 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:47:52 localhost podman[98117]: 2025-11-23 08:47:52.284138087 +0000 UTC m=+0.191785815 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64)
Nov 23 03:47:52 localhost podman[98117]: unhealthy
Nov 23 03:47:52 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:47:52 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:47:52 localhost podman[98118]: 2025-11-23 08:47:52.299840221 +0000 UTC m=+0.203271706 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:47:52 localhost podman[98118]: 2025-11-23 08:47:52.316993394 +0000 UTC m=+0.220424929 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red 
Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com)
Nov 23 03:47:52 localhost podman[98118]: unhealthy
Nov 23 03:47:52 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:47:52 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:47:52 localhost podman[98119]: 2025-11-23 08:47:52.381964046 +0000 UTC m=+0.286390778 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:47:52 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:48:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:48:14 localhost recover_tripleo_nova_virtqemud[98186]: 61733
Nov 23 03:48:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:48:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:48:15 localhost podman[98187]: 2025-11-23 08:48:15.209635121 +0000 UTC m=+0.115353703 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64)
Nov 23 03:48:15 localhost podman[98187]: 2025-11-23 08:48:15.221054129 +0000 UTC m=+0.126772721 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, distribution-scope=public, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible)
Nov 23 03:48:15 localhost podman[98196]: 2025-11-23 08:48:15.261219662 +0000 UTC m=+0.151283153 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, container_name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:48:15 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:48:15 localhost podman[98196]: 2025-11-23 08:48:15.324954762 +0000 UTC m=+0.215018283 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, container_name=iscsid, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:48:15 localhost podman[98188]: 2025-11-23 08:48:15.32596421 +0000 UTC m=+0.226308207 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:48:15 localhost podman[98189]: 2025-11-23 08:48:15.359223166 +0000 UTC m=+0.257485837 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:48:15 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:48:15 localhost podman[98189]: 2025-11-23 08:48:15.409874813 +0000 UTC m=+0.308137504 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Nov 23 03:48:15 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:48:15 localhost podman[98207]: 2025-11-23 08:48:15.431771214 +0000 UTC m=+0.315282667 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:48:15 localhost podman[98188]: 2025-11-23 08:48:15.46089678 +0000 UTC m=+0.361240747 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public)
Nov 23 03:48:15 localhost podman[98190]: 2025-11-23 08:48:15.474173558 +0000 UTC m=+0.367764413 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, version=17.1.12, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:48:15 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:48:15 localhost podman[98207]: 2025-11-23 08:48:15.512605065 +0000 UTC m=+0.396116488 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true)
Nov 23 03:48:15 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:48:15 localhost podman[98190]: 2025-11-23 08:48:15.535190414 +0000 UTC m=+0.428781339 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:48:15 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:48:21 localhost podman[98403]: 2025-11-23 08:48:21.181886865 +0000 UTC m=+0.087500252 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 
17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:48:21 localhost podman[98403]: 2025-11-23 08:48:21.553073249 +0000 UTC m=+0.458686666 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, container_name=nova_migration_target, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Nov 23 03:48:21 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:48:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:48:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:48:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:48:23 localhost podman[98425]: 2025-11-23 08:48:23.176137491 +0000 UTC m=+0.090422241 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Nov 23 03:48:23 localhost podman[98426]: 2025-11-23 08:48:23.236782096 +0000 UTC m=+0.144740695 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:48:23 localhost podman[98426]: 2025-11-23 08:48:23.247263489 +0000 UTC m=+0.155222058 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc.)
Nov 23 03:48:23 localhost podman[98426]: unhealthy
Nov 23 03:48:23 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:48:23 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:48:23 localhost podman[98425]: 2025-11-23 08:48:23.269341275 +0000 UTC m=+0.183626015 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=)
Nov 23 03:48:23 localhost podman[98425]: unhealthy
Nov 23 03:48:23 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:48:23 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:48:23 localhost podman[98427]: 2025-11-23 08:48:23.353051194 +0000 UTC m=+0.258930837 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Nov 23 03:48:23 localhost podman[98427]: 2025-11-23 08:48:23.551073447 +0000 UTC m=+0.456953140 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, 
io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, distribution-scope=public, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1)
Nov 23 03:48:23 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:48:46 localhost systemd[1]: tmp-crun.D829yL.mount: Deactivated successfully.
Nov 23 03:48:46 localhost podman[98496]: 2025-11-23 08:48:46.212339669 +0000 UTC m=+0.108617861 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, distribution-scope=public, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:48:46 localhost podman[98495]: 2025-11-23 08:48:46.255910494 +0000 UTC m=+0.156291767 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 
cron, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, architecture=x86_64, config_id=tripleo_step4)
Nov 23 03:48:46 localhost podman[98495]: 2025-11-23 08:48:46.265815852 +0000 UTC m=+0.166197135 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, tcib_managed=true, container_name=logrotate_crond, distribution-scope=public, release=1761123044, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:48:46 localhost podman[98497]: 2025-11-23 08:48:46.268358881 +0000 UTC m=+0.157747737 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, 
io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 23 03:48:46 localhost podman[98496]: 2025-11-23 08:48:46.270987452 +0000 UTC m=+0.167265564 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:48:46 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:48:46 localhost podman[98503]: 2025-11-23 08:48:46.307542428 +0000 UTC m=+0.191806947 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, release=1761123044, config_id=tripleo_step3, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public)
Nov 23 03:48:46 localhost podman[98503]: 2025-11-23 08:48:46.317809715 +0000 UTC m=+0.202074314 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.component=openstack-collectd-container, container_name=collectd, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd)
Nov 23 03:48:46 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:48:46 localhost podman[98509]: 2025-11-23 08:48:46.362089359 +0000 UTC m=+0.242450332 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:48:46 localhost podman[98509]: 2025-11-23 08:48:46.368978286 +0000 UTC m=+0.249339289 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step3)
Nov 23 03:48:46 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:48:46 localhost podman[98514]: 2025-11-23 08:48:46.427738661 +0000 UTC m=+0.305279028 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, release=1761123044)
Nov 23 03:48:46 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:48:46 localhost podman[98497]: 2025-11-23 08:48:46.452288494 +0000 UTC m=+0.341677360 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Nov 23 03:48:46 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:48:46 localhost podman[98514]: 2025-11-23 08:48:46.488008377 +0000 UTC m=+0.365548734 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:48:46 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:48:52 localhost podman[98629]: 2025-11-23 08:48:52.180680768 +0000 UTC m=+0.087226004 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:48:52 localhost podman[98629]: 2025-11-23 08:48:52.577994688 +0000 UTC m=+0.484539954 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z)
Nov 23 03:48:52 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:48:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:48:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:48:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:48:54 localhost systemd[1]: tmp-crun.RRQagy.mount: Deactivated successfully.
Nov 23 03:48:54 localhost podman[98652]: 2025-11-23 08:48:54.193366823 +0000 UTC m=+0.099023033 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:48:54 localhost podman[98653]: 2025-11-23 08:48:54.230857914 +0000 UTC m=+0.135574919 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:34:05Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4)
Nov 23 03:48:54 localhost podman[98652]: 2025-11-23 08:48:54.284641815 +0000 UTC m=+0.190298025 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:14:25Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Nov 23 03:48:54 localhost podman[98652]: unhealthy
Nov 23 03:48:54 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:48:54 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:48:54 localhost podman[98654]: 2025-11-23 08:48:54.302819356 +0000 UTC m=+0.199877924 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, release=1761123044, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4)
Nov 23 03:48:54 localhost podman[98653]: 2025-11-23 08:48:54.3207869 +0000 UTC m=+0.225503915 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12)
Nov 23 03:48:54 localhost podman[98653]: unhealthy
Nov 23 03:48:54 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:48:54 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:48:54 localhost podman[98654]: 2025-11-23 08:48:54.515948636 +0000 UTC m=+0.413007164 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Nov 23 03:48:54 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:49:05 localhost systemd[1]: session-28.scope: Deactivated successfully.
Nov 23 03:49:05 localhost systemd[1]: session-28.scope: Consumed 7min 13.087s CPU time.
Nov 23 03:49:05 localhost systemd-logind[761]: Session 28 logged out. Waiting for processes to exit.
Nov 23 03:49:05 localhost systemd-logind[761]: Removed session 28.
Nov 23 03:49:16 localhost systemd[1]: Stopping User Manager for UID 1003...
Nov 23 03:49:16 localhost systemd[35892]: Activating special unit Exit the Session...
Nov 23 03:49:16 localhost systemd[35892]: Removed slice User Background Tasks Slice.
Nov 23 03:49:16 localhost systemd[35892]: Stopped target Main User Target.
Nov 23 03:49:16 localhost systemd[35892]: Stopped target Basic System.
Nov 23 03:49:16 localhost systemd[35892]: Stopped target Paths.
Nov 23 03:49:16 localhost systemd[35892]: Stopped target Sockets.
Nov 23 03:49:16 localhost systemd[35892]: Stopped target Timers.
Nov 23 03:49:16 localhost systemd[35892]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 23 03:49:16 localhost systemd[35892]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 03:49:16 localhost systemd[35892]: Closed D-Bus User Message Bus Socket.
Nov 23 03:49:16 localhost systemd[35892]: Stopped Create User's Volatile Files and Directories.
Nov 23 03:49:16 localhost systemd[35892]: Removed slice User Application Slice.
Nov 23 03:49:16 localhost systemd[35892]: Reached target Shutdown.
Nov 23 03:49:16 localhost systemd[35892]: Finished Exit the Session.
Nov 23 03:49:16 localhost systemd[35892]: Reached target Exit the Session.
Nov 23 03:49:16 localhost systemd[1]: user@1003.service: Deactivated successfully.
Nov 23 03:49:16 localhost systemd[1]: Stopped User Manager for UID 1003.
Nov 23 03:49:16 localhost systemd[1]: user@1003.service: Consumed 4.292s CPU time.
Nov 23 03:49:16 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Nov 23 03:49:16 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Nov 23 03:49:16 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Nov 23 03:49:16 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Nov 23 03:49:16 localhost systemd[1]: Removed slice User Slice of UID 1003.
Nov 23 03:49:16 localhost systemd[1]: user-1003.slice: Consumed 7min 17.409s CPU time.
Nov 23 03:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:49:17 localhost systemd[1]: tmp-crun.adfz9U.mount: Deactivated successfully.
Nov 23 03:49:17 localhost podman[98729]: 2025-11-23 08:49:17.189798109 +0000 UTC m=+0.082746404 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, 
maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vcs-type=git)
Nov 23 03:49:17 localhost podman[98729]: 2025-11-23 08:49:17.206728296 +0000 UTC m=+0.099676591 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:49:17 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:49:17 localhost podman[98726]: 2025-11-23 08:49:17.208327009 +0000 UTC m=+0.104855440 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:49:17 localhost podman[98726]: 2025-11-23 08:49:17.287857455 +0000 UTC m=+0.184385746 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true)
Nov 23 03:49:17 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:49:17 localhost podman[98727]: 2025-11-23 08:49:17.301725379 +0000 UTC m=+0.197498740 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, maintainer=OpenStack TripleO Team, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=)
Nov 23 03:49:17 localhost podman[98725]: 2025-11-23 08:49:17.259925102 +0000 UTC m=+0.157077989 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, architecture=x86_64, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:49:32Z, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:49:17 localhost podman[98727]: 2025-11-23 08:49:17.335049509 +0000 UTC m=+0.230822870 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team)
Nov 23 03:49:17 localhost podman[98725]: 2025-11-23 08:49:17.346096427 +0000 UTC m=+0.243249364 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:49:17 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:49:17 localhost podman[98730]: 2025-11-23 08:49:17.356872797 +0000 UTC m=+0.246907453 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, 
distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:49:17 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:49:17 localhost podman[98730]: 2025-11-23 08:49:17.413561847 +0000 UTC m=+0.303596443 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4)
Nov 23 03:49:17 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:49:17 localhost podman[98728]: 2025-11-23 08:49:17.418040767 +0000 UTC m=+0.307536878 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, name=rhosp17/openstack-collectd, version=17.1.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true)
Nov 23 03:49:17 localhost podman[98728]: 2025-11-23 08:49:17.498092837 +0000 UTC m=+0.387588918 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true)
Nov 23 03:49:17 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:49:23 localhost podman[98939]: 2025-11-23 08:49:23.1855705 +0000 UTC m=+0.085611131 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Nov 23 03:49:23 localhost podman[98939]: 2025-11-23 08:49:23.567136204 +0000 UTC m=+0.467176895 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, 
name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:49:23 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:49:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:49:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:49:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:49:25 localhost podman[98964]: 2025-11-23 08:49:25.18295318 +0000 UTC m=+0.080245046 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Nov 23 03:49:25 localhost podman[98963]: 2025-11-23 08:49:25.240568155 +0000 UTC m=+0.139313240 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.buildah.version=1.41.4)
Nov 23 03:49:25 localhost podman[98963]: 2025-11-23 08:49:25.288374695 +0000 UTC m=+0.187119740 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible)
Nov 23 03:49:25 localhost podman[98963]: unhealthy
Nov 23 03:49:25 localhost podman[98962]: 2025-11-23 08:49:25.299092724 +0000 UTC m=+0.201087986 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, version=17.1.12, container_name=ovn_metadata_agent, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Nov 23 03:49:25 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:49:25 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:49:25 localhost podman[98962]: 2025-11-23 08:49:25.344170081 +0000 UTC m=+0.246165343 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Nov 23 03:49:25 localhost podman[98962]: unhealthy
Nov 23 03:49:25 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:49:25 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:49:25 localhost podman[98964]: 2025-11-23 08:49:25.381879637 +0000 UTC m=+0.279171553 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, release=1761123044, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12)
Nov 23 03:49:25 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:49:26 localhost systemd[1]: tmp-crun.MfeXo8.mount: Deactivated successfully.
Nov 23 03:49:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:49:34 localhost recover_tripleo_nova_virtqemud[99032]: 61733
Nov 23 03:49:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:49:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:49:48 localhost podman[99034]: 2025-11-23 08:49:48.270952417 +0000 UTC m=+0.162534246 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, vcs-type=git, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:49:48 localhost podman[99033]: 2025-11-23 08:49:48.216722924 +0000 UTC m=+0.109780763 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:49:48 localhost podman[99047]: 2025-11-23 08:49:48.315719765 +0000 UTC m=+0.194389866 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true)
Nov 23 03:49:48 localhost podman[99034]: 2025-11-23 08:49:48.32815962 +0000 UTC m=+0.219741489 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, container_name=nova_compute)
Nov 23 03:49:48 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:49:48 localhost podman[99054]: 2025-11-23 08:49:48.242988483 +0000 UTC m=+0.112626560 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, distribution-scope=public, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:49:48 localhost podman[99033]: 2025-11-23 08:49:48.353038652 +0000 UTC m=+0.246096471 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z)
Nov 23 03:49:48 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:49:48 localhost podman[99054]: 2025-11-23 08:49:48.377254966 +0000 UTC m=+0.246893083 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, release=1761123044, architecture=x86_64, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=)
Nov 23 03:49:48 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:49:48 localhost podman[99047]: 2025-11-23 08:49:48.404193122 +0000 UTC m=+0.282863253 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, managed_by=tripleo_ansible)
Nov 23 03:49:48 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:49:48 localhost podman[99036]: 2025-11-23 08:49:48.476197295 +0000 UTC m=+0.356262003 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Nov 23 03:49:48 localhost podman[99036]: 2025-11-23 08:49:48.487951702 +0000 UTC m=+0.368016410 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4)
Nov 23 03:49:48 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:49:48 localhost podman[99035]: 2025-11-23 08:49:48.576785159 +0000 UTC m=+0.462700545 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, 
com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64)
Nov 23 03:49:48 localhost podman[99035]: 2025-11-23 08:49:48.637183599 +0000 UTC m=+0.523098965 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, container_name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=)
Nov 23 03:49:48 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:49:54 localhost podman[99174]: 2025-11-23 08:49:54.178822975 +0000 UTC m=+0.083992717 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, architecture=x86_64, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Nov 23 03:49:54 localhost podman[99174]: 2025-11-23 08:49:54.552041254 +0000 UTC m=+0.457210956 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:49:54 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:49:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:49:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:49:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:49:56 localhost systemd[1]: tmp-crun.hKJBOG.mount: Deactivated successfully.
Nov 23 03:49:56 localhost podman[99199]: 2025-11-23 08:49:56.19752047 +0000 UTC m=+0.099767382 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd)
Nov 23 03:49:56 localhost podman[99198]: 2025-11-23 08:49:56.236186424 +0000 UTC m=+0.138887378 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:49:56 localhost podman[99198]: 2025-11-23 08:49:56.28347316 +0000 UTC m=+0.186174174 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:49:56 localhost podman[99198]: unhealthy
Nov 23 03:49:56 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:49:56 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:49:56 localhost podman[99197]: 2025-11-23 08:49:56.289123152 +0000 UTC m=+0.193293526 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:49:56 localhost podman[99197]: 2025-11-23 08:49:56.369479451 +0000 UTC m=+0.273649875 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, release=1761123044)
Nov 23 03:49:56 localhost podman[99197]: unhealthy
Nov 23 03:49:56 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:49:56 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:49:56 localhost podman[99199]: 2025-11-23 08:49:56.473968569 +0000 UTC m=+0.376215471 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Nov 23 03:49:56 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:50:19 localhost podman[99263]: 2025-11-23 08:50:19.219440033 +0000 UTC m=+0.118303172 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4)
Nov 23 03:50:19 localhost podman[99263]: 2025-11-23 08:50:19.22820485 +0000 UTC m=+0.127067989 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, architecture=x86_64, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Nov 23 03:50:19 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:50:19 localhost systemd[1]: tmp-crun.o9yyPS.mount: Deactivated successfully.
Nov 23 03:50:19 localhost podman[99272]: 2025-11-23 08:50:19.32862363 +0000 UTC m=+0.213246865 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:50:19 localhost podman[99272]: 2025-11-23 08:50:19.365049983 +0000 UTC m=+0.249673048 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Nov 23 03:50:19 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:50:19 localhost podman[99278]: 2025-11-23 08:50:19.37716143 +0000 UTC m=+0.260019497 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-type=git, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z)
Nov 23 03:50:19 localhost podman[99266]: 2025-11-23 08:50:19.424844217 +0000 UTC m=+0.314081496 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=collectd, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 23 03:50:19 localhost podman[99266]: 2025-11-23 08:50:19.43832104 +0000 UTC m=+0.327558369 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Nov 23 03:50:19 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:50:19 localhost podman[99278]: 2025-11-23 08:50:19.488597707 +0000 UTC m=+0.371455814 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, container_name=ceilometer_agent_compute, 
release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team)
Nov 23 03:50:19 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:50:19 localhost podman[99265]: 2025-11-23 08:50:19.581900905 +0000 UTC m=+0.474744702 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Nov 23 03:50:19 localhost podman[99265]: 2025-11-23 08:50:19.614774211 +0000 UTC m=+0.507617958 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z)
Nov 23 03:50:19 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:50:19 localhost podman[99264]: 2025-11-23 08:50:19.634069592 +0000 UTC m=+0.529932081 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, container_name=nova_compute)
Nov 23 03:50:19 localhost podman[99264]: 2025-11-23 08:50:19.69400248 +0000 UTC m=+0.589865009 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:50:19 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:50:25 localhost systemd[1]: tmp-crun.r7WGSA.mount: Deactivated successfully.
Nov 23 03:50:25 localhost podman[99474]: 2025-11-23 08:50:25.188090415 +0000 UTC m=+0.088450018 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team)
Nov 23 03:50:25 localhost podman[99474]: 2025-11-23 08:50:25.557438681 +0000 UTC m=+0.457798334 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, distribution-scope=public, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:50:25 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:50:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:50:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:50:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:50:27 localhost systemd[1]: tmp-crun.GksVHg.mount: Deactivated successfully.
Nov 23 03:50:27 localhost systemd[1]: tmp-crun.1PdJoM.mount: Deactivated successfully.
Nov 23 03:50:27 localhost podman[99495]: 2025-11-23 08:50:27.192919797 +0000 UTC m=+0.093057672 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:50:27 localhost podman[99495]: 2025-11-23 08:50:27.208065705 +0000 UTC m=+0.108203570 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team)
Nov 23 03:50:27 localhost podman[99495]: unhealthy
Nov 23 03:50:27 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:50:27 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:50:27 localhost podman[99497]: 2025-11-23 08:50:27.159421923 +0000 UTC m=+0.056721672 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vcs-type=git, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:50:27 localhost podman[99496]: 2025-11-23 08:50:27.275303779 +0000 UTC m=+0.175341512 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, 
container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:50:27 localhost podman[99496]: 2025-11-23 08:50:27.318172776 +0000 UTC m=+0.218210529 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, 
container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:50:27 localhost podman[99496]: unhealthy
Nov 23 03:50:27 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:50:27 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:50:27 localhost podman[99497]: 2025-11-23 08:50:27.343854039 +0000 UTC m=+0.241153828 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, distribution-scope=public, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, architecture=x86_64)
Nov 23 03:50:27 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:50:28 localhost sshd[99562]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:50:28 localhost sshd[99563]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:50:50 localhost podman[99568]: 2025-11-23 08:50:50.198804701 +0000 UTC m=+0.089876651 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:50:50 localhost podman[99568]: 2025-11-23 08:50:50.210961135 +0000 UTC m=+0.102033115 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1)
Nov 23 03:50:50 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:50:50 localhost podman[99565]: 2025-11-23 08:50:50.256278455 +0000 UTC m=+0.157284150 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, architecture=x86_64, container_name=logrotate_crond, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:50:50 localhost podman[99566]: 2025-11-23 08:50:50.310708109 +0000 UTC m=+0.210502051 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, config_id=tripleo_step5, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Nov 23 03:50:50 localhost podman[99579]: 2025-11-23 08:50:50.366710065 +0000 UTC m=+0.252761110 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4)
Nov 23 03:50:50 localhost podman[99566]: 2025-11-23 08:50:50.376152526 +0000 UTC m=+0.275946458 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, release=1761123044, architecture=x86_64)
Nov 23 03:50:50 localhost podman[99579]: 2025-11-23 08:50:50.402825669 +0000 UTC m=+0.288876754 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vcs-type=git, version=17.1.12, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:50:50 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:50:50 localhost podman[99567]: 2025-11-23 08:50:50.415271591 +0000 UTC m=+0.307345488 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:50:50 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:50:50 localhost podman[99567]: 2025-11-23 08:50:50.448523819 +0000 UTC m=+0.340597666 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, release=1761123044, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:50:50 localhost podman[99580]: 2025-11-23 08:50:50.473477125 +0000 UTC m=+0.354043184 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:50:50 localhost podman[99565]: 2025-11-23 08:50:50.492200976 +0000 UTC m=+0.393206741 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Nov 23 03:50:50 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:50:50 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:50:50 localhost podman[99580]: 2025-11-23 08:50:50.534395752 +0000 UTC m=+0.414961841 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, 
config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute)
Nov 23 03:50:50 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:50:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:50:56 localhost podman[99697]: 2025-11-23 08:50:56.179777971 +0000 UTC m=+0.089513371 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Nov 23 03:50:56 localhost podman[99697]: 2025-11-23 08:50:56.553032718 +0000 UTC m=+0.462768118 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=nova_migration_target)
Nov 23 03:50:56 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:50:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:50:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:50:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:50:58 localhost podman[99720]: 2025-11-23 08:50:58.179663223 +0000 UTC m=+0.085543086 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team)
Nov 23 03:50:58 localhost podman[99720]: 2025-11-23 08:50:58.200894239 +0000 UTC m=+0.106774112 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1)
Nov 23 03:50:58 localhost podman[99720]: unhealthy
Nov 23 03:50:58 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:50:58 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:50:58 localhost podman[99722]: 2025-11-23 08:50:58.249128728 +0000 UTC m=+0.150746276 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red 
Hat, Inc., config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public)
Nov 23 03:50:58 localhost podman[99721]: 2025-11-23 08:50:58.337827066 +0000 UTC m=+0.241837818 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 23 03:50:58 localhost podman[99721]: 2025-11-23 08:50:58.359994538 +0000 UTC m=+0.264005290 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, release=1761123044, container_name=ovn_controller, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Nov 23 03:50:58 localhost podman[99721]: unhealthy
Nov 23 03:50:58 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:50:58 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:50:58 localhost podman[99722]: 2025-11-23 08:50:58.470601842 +0000 UTC m=+0.372219370 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr)
Nov 23 03:50:58 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:51:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:51:14 localhost recover_tripleo_nova_virtqemud[99785]: 61733
Nov 23 03:51:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:51:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:51:21 localhost podman[99786]: 2025-11-23 08:51:21.209697743 +0000 UTC m=+0.113798520 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-cron)
Nov 23 03:51:21 localhost podman[99788]: 2025-11-23 08:51:21.258915597 +0000 UTC m=+0.159895121 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Nov 23 03:51:21 localhost podman[99786]: 2025-11-23 08:51:21.272108719 +0000 UTC m=+0.176209576 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond)
Nov 23 03:51:21 localhost podman[99787]: 2025-11-23 08:51:21.309577949 +0000 UTC m=+0.210316247 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Nov 23 03:51:21 localhost podman[99795]: 2025-11-23 08:51:21.279922498 +0000 UTC m=+0.171357496 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Nov 23 03:51:21 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:51:21 localhost podman[99787]: 2025-11-23 08:51:21.347820891 +0000 UTC m=+0.248559139 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64)
Nov 23 03:51:21 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:51:21 localhost podman[99788]: 2025-11-23 08:51:21.361695532 +0000 UTC m=+0.262675156 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc.)
Nov 23 03:51:21 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:51:21 localhost podman[99795]: 2025-11-23 08:51:21.416475705 +0000 UTC m=+0.307910723 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
version=17.1.12, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:51:21 localhost podman[99806]: 2025-11-23 08:51:21.425980208 +0000 UTC m=+0.313603085 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, version=17.1.12, architecture=x86_64, release=1761123044, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Nov 23 03:51:21 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:51:21 localhost podman[99789]: 2025-11-23 08:51:21.472032997 +0000 UTC m=+0.366402444 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64)
Nov 23 03:51:21 localhost podman[99806]: 2025-11-23 08:51:21.490224944 +0000 UTC m=+0.377847851 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Nov 23 03:51:21 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:51:21 localhost podman[99789]: 2025-11-23 08:51:21.513000732 +0000 UTC m=+0.407370259 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack 
TripleO Team, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=collectd, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:51:21 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:51:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:51:27 localhost podman[99995]: 2025-11-23 08:51:27.177560043 +0000 UTC m=+0.082205376 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, container_name=nova_migration_target, tcib_managed=true, build-date=2025-11-19T00:36:58Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:51:27 localhost podman[99995]: 2025-11-23 08:51:27.581121628 +0000 UTC m=+0.485766921 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com)
Nov 23 03:51:27 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:51:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:51:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:51:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:51:29 localhost podman[100019]: 2025-11-23 08:51:29.18550965 +0000 UTC m=+0.079643238 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4)
Nov 23 03:51:29 localhost podman[100017]: 2025-11-23 08:51:29.238180467 +0000 UTC m=+0.141235513 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Nov 23 03:51:29 localhost podman[100017]: 2025-11-23 08:51:29.272639107 +0000 UTC m=+0.175694153 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent)
Nov 23 03:51:29 localhost podman[100017]: unhealthy
Nov 23 03:51:29 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:51:29 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:51:29 localhost podman[100018]: 2025-11-23 08:51:29.293575645 +0000 UTC m=+0.190355422 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1761123044, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-type=git, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, name=rhosp17/openstack-ovn-controller)
Nov 23 03:51:29 localhost podman[100018]: 2025-11-23 08:51:29.331377735 +0000 UTC m=+0.228157592 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, 
distribution-scope=public, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z)
Nov 23 03:51:29 localhost podman[100018]: unhealthy
Nov 23 03:51:29 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:51:29 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:51:29 localhost podman[100019]: 2025-11-23 08:51:29.427210584 +0000 UTC m=+0.321344192 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Nov 23 03:51:29 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:51:30 localhost systemd[1]: tmp-crun.fXZy7c.mount: Deactivated successfully.
Nov 23 03:51:49 localhost sshd[100082]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:51:52 localhost podman[100101]: 2025-11-23 08:51:52.215824657 +0000 UTC m=+0.103555006 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64)
Nov 23 03:51:52 localhost podman[100101]: 2025-11-23 08:51:52.243859876 +0000 UTC m=+0.131590215 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, vcs-type=git, distribution-scope=public)
Nov 23 03:51:52 localhost podman[100085]: 2025-11-23 08:51:52.266179922 +0000 UTC m=+0.169862957 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Nov 23 03:51:52 localhost podman[100085]: 2025-11-23 08:51:52.295016862 +0000 UTC m=+0.198699917 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute)
Nov 23 03:51:52 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:51:52 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:51:52 localhost podman[100098]: 2025-11-23 08:51:52.422205829 +0000 UTC m=+0.316642467 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid)
Nov 23 03:51:52 localhost podman[100098]: 2025-11-23 08:51:52.431885947 +0000 UTC m=+0.326322575 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Nov 23 03:51:52 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:51:52 localhost podman[100086]: 2025-11-23 08:51:52.47542503 +0000 UTC m=+0.375061397 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:51:52 localhost podman[100090]: 2025-11-23 08:51:52.530658194 +0000 UTC m=+0.424061934 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git)
Nov 23 03:51:52 localhost podman[100086]: 2025-11-23 08:51:52.531647141 +0000 UTC m=+0.431283518 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, distribution-scope=public, 
io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:51:52 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:51:52 localhost podman[100090]: 2025-11-23 08:51:52.609671705 +0000 UTC m=+0.503075445 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:51:52 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:51:52 localhost podman[100084]: 2025-11-23 08:51:52.582679463 +0000 UTC m=+0.489301267 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, container_name=logrotate_crond, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, name=rhosp17/openstack-cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:51:52 localhost podman[100084]: 2025-11-23 08:51:52.662112044 +0000 UTC m=+0.568733788 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:51:52 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:51:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:51:58 localhost podman[100225]: 2025-11-23 08:51:58.184244151 +0000 UTC m=+0.088586686 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=nova_migration_target, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 
17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:51:58 localhost podman[100225]: 2025-11-23 08:51:58.548695614 +0000 UTC m=+0.453038169 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:51:58 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:52:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:52:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:52:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:52:00 localhost podman[100251]: 2025-11-23 08:52:00.167874813 +0000 UTC m=+0.071910272 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Nov 23 03:52:00 localhost podman[100250]: 2025-11-23 08:52:00.187757974 +0000 UTC m=+0.088991428 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 23 03:52:00 localhost podman[100250]: 2025-11-23 08:52:00.228647335 +0000 UTC m=+0.129880839 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Nov 23 03:52:00 localhost systemd[1]: tmp-crun.GhXFfj.mount: Deactivated successfully.
Nov 23 03:52:00 localhost podman[100250]: unhealthy
Nov 23 03:52:00 localhost podman[100249]: 2025-11-23 08:52:00.242593037 +0000 UTC m=+0.147699395 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vcs-type=git, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:52:00 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:52:00 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:52:00 localhost podman[100249]: 2025-11-23 08:52:00.260090364 +0000 UTC m=+0.165196732 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Nov 23 03:52:00 localhost podman[100249]: unhealthy
Nov 23 03:52:00 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:52:00 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:52:00 localhost podman[100251]: 2025-11-23 08:52:00.370875423 +0000 UTC m=+0.274910862 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, name=rhosp17/openstack-qdrouterd)
Nov 23 03:52:00 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:52:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:52:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5640 writes, 24K keys, 5640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5640 writes, 724 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:52:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 03:52:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 4929 writes, 22K keys, 4929 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4929 writes, 684 syncs, 7.21 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 03:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:52:23 localhost systemd[1]: tmp-crun.522Pcx.mount: Deactivated successfully.
Nov 23 03:52:23 localhost podman[100332]: 2025-11-23 08:52:23.129450143 +0000 UTC m=+0.088392951 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Nov 23 03:52:23 localhost podman[100329]: 2025-11-23 08:52:23.160897592 +0000 UTC m=+0.125719808 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-cron-container, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, distribution-scope=public)
Nov 23 03:52:23 localhost podman[100329]: 2025-11-23 08:52:23.171892596 +0000 UTC m=+0.136714832 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64)
Nov 23 03:52:23 localhost podman[100338]: 2025-11-23 08:52:23.140996611 +0000 UTC m=+0.092073060 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, 
maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:51:28Z, container_name=collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:52:23 localhost podman[100352]: 2025-11-23 08:52:23.196911094 +0000 UTC m=+0.142590519 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:52:23 localhost podman[100352]: 2025-11-23 08:52:23.244419493 +0000 UTC m=+0.190098958 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true)
Nov 23 03:52:23 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:52:23 localhost podman[100345]: 2025-11-23 08:52:23.265397963 +0000 UTC m=+0.210658096 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, container_name=iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Nov 23 03:52:23 localhost podman[100345]: 2025-11-23 08:52:23.274876256 +0000 UTC m=+0.220136399 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, architecture=x86_64)
Nov 23 03:52:23 localhost podman[100338]: 2025-11-23 08:52:23.275998435 +0000 UTC m=+0.227074914 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, container_name=collectd, release=1761123044, 
config_id=tripleo_step3, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:52:23 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:52:23 localhost podman[100331]: 2025-11-23 08:52:23.323507984 +0000 UTC m=+0.286462830 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team)
Nov 23 03:52:23 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:52:23 localhost podman[100331]: 2025-11-23 08:52:23.352864688 +0000 UTC m=+0.315819554 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, tcib_managed=true, vcs-type=git, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:52:23 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:52:23 localhost podman[100332]: 2025-11-23 08:52:23.381843022 +0000 UTC m=+0.340785890 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Nov 23 03:52:23 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:52:23 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:52:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:52:29 localhost podman[100522]: 2025-11-23 08:52:29.170210568 +0000 UTC m=+0.075352542 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.buildah.version=1.41.4)
Nov 23 03:52:29 localhost podman[100522]: 2025-11-23 08:52:29.548126719 +0000 UTC m=+0.453268653 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:52:29 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:52:31 localhost systemd[1]: tmp-crun.mcw3E4.mount: Deactivated successfully.
Nov 23 03:52:31 localhost podman[100547]: 2025-11-23 08:52:31.193235219 +0000 UTC m=+0.095489802 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:52:31 localhost systemd[1]: tmp-crun.sqHN0G.mount: Deactivated successfully.
Nov 23 03:52:31 localhost podman[100546]: 2025-11-23 08:52:31.288027469 +0000 UTC m=+0.192591233 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Nov 23 03:52:31 localhost podman[100545]: 2025-11-23 08:52:31.318397811 +0000 UTC m=+0.226567612 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Nov 23 03:52:31 localhost podman[100545]: 2025-11-23 08:52:31.332022704 +0000 UTC m=+0.240192535 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, container_name=ovn_metadata_agent, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:52:31 localhost podman[100545]: unhealthy
Nov 23 03:52:31 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:52:31 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:52:31 localhost podman[100546]: 2025-11-23 08:52:31.384738112 +0000 UTC m=+0.289301926 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git)
Nov 23 03:52:31 localhost podman[100547]: 2025-11-23 08:52:31.390143247 +0000 UTC m=+0.292397830 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true)
Nov 23 03:52:31 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:52:31 localhost podman[100546]: unhealthy
Nov 23 03:52:31 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:52:31 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:52:54 localhost systemd[1]: tmp-crun.vqoIdN.mount: Deactivated successfully.
Nov 23 03:52:54 localhost podman[100627]: 2025-11-23 08:52:54.213657431 +0000 UTC m=+0.102277982 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step3)
Nov 23 03:52:54 localhost podman[100637]: 2025-11-23 08:52:54.264901389 +0000 UTC m=+0.143551464 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc.)
Nov 23 03:52:54 localhost podman[100615]: 2025-11-23 08:52:54.171991228 +0000 UTC m=+0.076497773 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:52:54 localhost podman[100616]: 2025-11-23 08:52:54.246272272 +0000 UTC m=+0.144388457 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Nov 23 03:52:54 localhost podman[100615]: 2025-11-23 08:52:54.304885758 +0000 UTC m=+0.209392283 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:52:54 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:52:54 localhost podman[100617]: 2025-11-23 08:52:54.314684909 +0000 UTC m=+0.209037753 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:52:54 localhost podman[100637]: 2025-11-23 08:52:54.319365304 +0000 UTC m=+0.198015409 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1)
Nov 23 03:52:54 localhost podman[100627]: 2025-11-23 08:52:54.327385578 +0000 UTC m=+0.216006179 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:52:54 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:52:54 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:52:54 localhost podman[100623]: 2025-11-23 08:52:54.358825908 +0000 UTC m=+0.246071362 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:52:54 localhost podman[100623]: 2025-11-23 08:52:54.372103192 +0000 UTC m=+0.259348656 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com)
Nov 23 03:52:54 localhost podman[100616]: 2025-11-23 08:52:54.380106875 +0000 UTC m=+0.278223020 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, distribution-scope=public, container_name=nova_compute, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5)
Nov 23 03:52:54 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:52:54 localhost podman[100617]: 2025-11-23 08:52:54.39146432 +0000 UTC m=+0.285817154 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:52:54 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:52:54 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:53:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:53:00 localhost podman[100745]: 2025-11-23 08:53:00.16949577 +0000 UTC m=+0.076585616 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, 
maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:53:00 localhost podman[100745]: 2025-11-23 08:53:00.510989078 +0000 UTC m=+0.418078914 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, release=1761123044, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:53:00 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:53:02 localhost podman[100769]: 2025-11-23 08:53:02.181701701 +0000 UTC m=+0.084445266 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, version=17.1.12, config_id=tripleo_step4, container_name=ovn_controller, vendor=Red Hat, Inc.)
Nov 23 03:53:02 localhost podman[100769]: 2025-11-23 08:53:02.219056769 +0000 UTC m=+0.121800314 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 23 03:53:02 localhost podman[100769]: unhealthy
Nov 23 03:53:02 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:53:02 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:53:02 localhost podman[100770]: 2025-11-23 08:53:02.292318275 +0000 UTC m=+0.190025465 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, distribution-scope=public, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Nov 23 03:53:02 localhost podman[100768]: 2025-11-23 08:53:02.294673738 +0000 UTC m=+0.199169429 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:53:02 localhost podman[100768]: 2025-11-23 08:53:02.378019603 +0000 UTC m=+0.282515304 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:53:02 localhost podman[100768]: unhealthy
Nov 23 03:53:02 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:53:02 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:53:02 localhost podman[100770]: 2025-11-23 08:53:02.479927655 +0000 UTC m=+0.377634845 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64)
Nov 23 03:53:02 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:53:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:53:24 localhost recover_tripleo_nova_virtqemud[100834]: 61733
Nov 23 03:53:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:53:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:53:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:53:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:53:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:53:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:53:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:53:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:53:25 localhost systemd[1]: tmp-crun.YCp6pQ.mount: Deactivated successfully.
Nov 23 03:53:25 localhost podman[100836]: 2025-11-23 08:53:25.237094397 +0000 UTC m=+0.139324320 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=nova_compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Nov 23 03:53:25 localhost podman[100836]: 2025-11-23 08:53:25.261782877 +0000 UTC m=+0.164012810 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, vcs-type=git)
Nov 23 03:53:25 localhost podman[100835]: 2025-11-23 08:53:25.229403883 +0000 UTC m=+0.132675284 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Nov 23 03:53:25 localhost podman[100854]: 2025-11-23 08:53:25.293064292 +0000 UTC m=+0.176809082 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Nov 23 03:53:25 localhost podman[100855]: 2025-11-23 08:53:25.304810906 +0000 UTC m=+0.188351901 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:11:48Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Nov 23 03:53:25 localhost podman[100835]: 2025-11-23 08:53:25.311981697 +0000 UTC m=+0.215253098 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Nov 23 03:53:25 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:53:25 localhost podman[100839]: 2025-11-23 08:53:25.343966012 +0000 UTC m=+0.235131529 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible)
Nov 23 03:53:25 localhost podman[100854]: 2025-11-23 08:53:25.357519163 +0000 UTC m=+0.241263943 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:53:25 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:53:25 localhost podman[100855]: 2025-11-23 08:53:25.378058212 +0000 UTC m=+0.261599227 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true)
Nov 23 03:53:25 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:53:25 localhost podman[100839]: 2025-11-23 08:53:25.433205274 +0000 UTC m=+0.324370791 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step3, release=1761123044, distribution-scope=public)
Nov 23 03:53:25 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:53:25 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:53:25 localhost podman[100837]: 2025-11-23 08:53:25.510748405 +0000 UTC m=+0.413183533 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:53:25 localhost podman[100837]: 2025-11-23 08:53:25.56413229 +0000 UTC m=+0.466567418 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:12:45Z, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible)
Nov 23 03:53:25 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:53:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:53:31 localhost podman[101047]: 2025-11-23 08:53:31.176579319 +0000 UTC m=+0.079076823 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:53:31 localhost podman[101047]: 2025-11-23 08:53:31.581133172 +0000 UTC m=+0.483630636 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1)
Nov 23 03:53:31 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:53:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:53:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:53:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:53:33 localhost podman[101069]: 2025-11-23 08:53:33.178226669 +0000 UTC m=+0.083230374 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, tcib_managed=true, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:53:33 localhost podman[101069]: 2025-11-23 08:53:33.218093513 +0000 UTC m=+0.123097208 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:53:33 localhost podman[101069]: unhealthy
Nov 23 03:53:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:53:33 localhost podman[101068]: 2025-11-23 08:53:33.22920306 +0000 UTC m=+0.135872259 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public)
Nov 23 03:53:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:53:33 localhost podman[101068]: 2025-11-23 08:53:33.272079695 +0000 UTC m=+0.178748894 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Nov 23 03:53:33 localhost podman[101068]: unhealthy
Nov 23 03:53:33 localhost systemd[1]: tmp-crun.CqPM2s.mount: Deactivated successfully.
Nov 23 03:53:33 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:53:33 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:53:33 localhost podman[101070]: 2025-11-23 08:53:33.295075169 +0000 UTC m=+0.194219358 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Nov 23 03:53:33 localhost podman[101070]: 2025-11-23 08:53:33.515102234 +0000 UTC m=+0.414246423 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044)
Nov 23 03:53:33 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:53:56 localhost systemd[1]: tmp-crun.gfZeic.mount: Deactivated successfully.
Nov 23 03:53:56 localhost podman[101134]: 2025-11-23 08:53:56.171713143 +0000 UTC m=+0.078580509 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step5)
Nov 23 03:53:56 localhost podman[101136]: 2025-11-23 08:53:56.18621528 +0000 UTC m=+0.084257570 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:53:56 localhost podman[101137]: 2025-11-23 08:53:56.218861942 +0000 UTC m=+0.115644938 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 
17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:53:56 localhost podman[101136]: 2025-11-23 08:53:56.223841526 +0000 UTC m=+0.121883776 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, vcs-type=git, version=17.1.12, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:53:56 localhost podman[101137]: 2025-11-23 08:53:56.233830132 +0000 UTC m=+0.130613148 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-iscsid-container, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4)
Nov 23 03:53:56 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:53:56 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:53:56 localhost podman[101148]: 2025-11-23 08:53:56.28357355 +0000 UTC m=+0.177673565 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4)
Nov 23 03:53:56 localhost podman[101135]: 2025-11-23 08:53:56.333392741 +0000 UTC m=+0.235448978 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=tripleo_step4)
Nov 23 03:53:56 localhost podman[101148]: 2025-11-23 08:53:56.337992473 +0000 UTC m=+0.232092578 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
container_name=ceilometer_agent_compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12)
Nov 23 03:53:56 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:53:56 localhost podman[101135]: 2025-11-23 08:53:56.371945179 +0000 UTC m=+0.274001446 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public)
Nov 23 03:53:56 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:53:56 localhost podman[101134]: 2025-11-23 08:53:56.385687406 +0000 UTC m=+0.292554812 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, architecture=x86_64, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:53:56 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:53:56 localhost podman[101133]: 2025-11-23 08:53:56.386631712 +0000 UTC m=+0.293956670 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Nov 23 03:53:56 localhost podman[101133]: 2025-11-23 08:53:56.466640928 +0000 UTC m=+0.373965846 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, version=17.1.12, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:53:56 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:53:57 localhost systemd[1]: tmp-crun.CouFl4.mount: Deactivated successfully.
Nov 23 03:54:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:54:02 localhost podman[101267]: 2025-11-23 08:54:02.171428143 +0000 UTC m=+0.078509097 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, 
release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:54:02 localhost podman[101267]: 2025-11-23 08:54:02.545382088 +0000 UTC m=+0.452463062 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute)
Nov 23 03:54:02 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:54:04 localhost podman[101291]: 2025-11-23 08:54:04.186562333 +0000 UTC m=+0.092423389 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, release=1761123044, container_name=ovn_controller)
Nov 23 03:54:04 localhost systemd[1]: tmp-crun.mPKsvY.mount: Deactivated successfully.
Nov 23 03:54:04 localhost podman[101291]: 2025-11-23 08:54:04.23102952 +0000 UTC m=+0.136890606 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, 
com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc.)
Nov 23 03:54:04 localhost podman[101291]: unhealthy
Nov 23 03:54:04 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:54:04 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:54:04 localhost podman[101290]: 2025-11-23 08:54:04.157577299 +0000 UTC m=+0.068520631 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Nov 23 03:54:04 localhost podman[101290]: 2025-11-23 08:54:04.287111178 +0000 UTC m=+0.198054520 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible)
Nov 23 03:54:04 localhost podman[101292]: 2025-11-23 08:54:04.235793378 +0000 UTC m=+0.138521931 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:54:04 localhost podman[101290]: unhealthy
Nov 23 03:54:04 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:54:04 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:54:04 localhost podman[101292]: 2025-11-23 08:54:04.431858513 +0000 UTC m=+0.334587046 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:54:04 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:54:27 localhost systemd[1]: tmp-crun.AP4KTu.mount: Deactivated successfully.
Nov 23 03:54:27 localhost podman[101356]: 2025-11-23 08:54:27.19665158 +0000 UTC m=+0.098187133 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container)
Nov 23 03:54:27 localhost podman[101356]: 2025-11-23 08:54:27.202126806 +0000 UTC m=+0.103662379 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public)
Nov 23 03:54:27 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:54:27 localhost podman[101358]: 2025-11-23 08:54:27.243589094 +0000 UTC m=+0.134661577 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z)
Nov 23 03:54:27 localhost podman[101357]: 2025-11-23 08:54:27.29329712 +0000 UTC m=+0.192640274 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:54:27 localhost podman[101357]: 2025-11-23 08:54:27.316519421 +0000 UTC m=+0.215862555 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Nov 23 03:54:27 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:54:27 localhost podman[101369]: 2025-11-23 08:54:27.399784414 +0000 UTC m=+0.287396875 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, architecture=x86_64)
Nov 23 03:54:27 localhost podman[101358]: 2025-11-23 08:54:27.420160049 +0000 UTC m=+0.311232582 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:54:27 localhost podman[101369]: 2025-11-23 08:54:27.430448293 +0000 UTC m=+0.318060814 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 03:54:27 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:54:27 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:54:27 localhost podman[101376]: 2025-11-23 08:54:27.503787452 +0000 UTC m=+0.385622559 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z)
Nov 23 03:54:27 localhost podman[101370]: 2025-11-23 08:54:27.557056644 +0000 UTC m=+0.439705092 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:54:27 localhost podman[101370]: 2025-11-23 08:54:27.565919921 +0000 UTC m=+0.448568269 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, build-date=2025-11-18T23:44:13Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:54:27 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:54:27 localhost podman[101376]: 2025-11-23 08:54:27.58010544 +0000 UTC m=+0.461940527 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Nov 23 03:54:27 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:54:32 localhost sshd[101613]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:54:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:54:33 localhost podman[101615]: 2025-11-23 08:54:33.189668141 +0000 UTC m=+0.097608367 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com)
Nov 23 03:54:33 localhost podman[101615]: 2025-11-23 08:54:33.567021308 +0000 UTC m=+0.474961544 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 03:54:33 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:54:35 localhost podman[101638]: 2025-11-23 08:54:35.18862787 +0000 UTC m=+0.095395958 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=)
Nov 23 03:54:35 localhost podman[101638]: 2025-11-23 08:54:35.22386405 +0000 UTC m=+0.130632098 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044)
Nov 23 03:54:35 localhost systemd[1]: tmp-crun.pxRtjZ.mount: Deactivated successfully.
Nov 23 03:54:35 localhost podman[101638]: unhealthy
Nov 23 03:54:35 localhost podman[101639]: 2025-11-23 08:54:35.232446049 +0000 UTC m=+0.137082241 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=ovn_controller, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z)
Nov 23 03:54:35 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:54:35 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:54:35 localhost podman[101639]: 2025-11-23 08:54:35.248790176 +0000 UTC m=+0.153426378 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Nov 23 03:54:35 localhost podman[101639]: unhealthy
Nov 23 03:54:35 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:54:35 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:54:35 localhost podman[101640]: 2025-11-23 08:54:35.300663802 +0000 UTC m=+0.198882242 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12)
Nov 23 03:54:35 localhost podman[101640]: 2025-11-23 08:54:35.502963203 +0000 UTC m=+0.401181633 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, container_name=metrics_qdr, tcib_managed=true, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Nov 23 03:54:35 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:54:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:54:54 localhost recover_tripleo_nova_virtqemud[101710]: 61733
Nov 23 03:54:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:54:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:54:58 localhost podman[101711]: 2025-11-23 08:54:58.193808766 +0000 UTC m=+0.098184112 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4)
Nov 23 03:54:58 localhost systemd[1]: tmp-crun.UBZTCn.mount: Deactivated successfully.
Nov 23 03:54:58 localhost podman[101724]: 2025-11-23 08:54:58.214025326 +0000 UTC m=+0.103627258 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, 
com.redhat.component=openstack-iscsid-container, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Nov 23 03:54:58 localhost podman[101724]: 2025-11-23 08:54:58.222170034 +0000 UTC m=+0.111771946 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, architecture=x86_64, version=17.1.12, release=1761123044, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1)
Nov 23 03:54:58 localhost podman[101714]: 2025-11-23 08:54:58.264165665 +0000 UTC m=+0.159747816 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat 
OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044)
Nov 23 03:54:58 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:54:58 localhost podman[101714]: 2025-11-23 08:54:58.35163637 +0000 UTC m=+0.247218471 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, name=rhosp17/openstack-collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com)
Nov 23 03:54:58 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:54:58 localhost podman[101733]: 2025-11-23 08:54:58.327103405 +0000 UTC m=+0.210102191 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:54:58 localhost podman[101713]: 2025-11-23 08:54:58.301914422 +0000 UTC m=+0.196476957 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container)
Nov 23 03:54:58 localhost podman[101733]: 2025-11-23 08:54:58.40591154 +0000 UTC m=+0.288910316 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:54:58 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:54:58 localhost podman[101711]: 2025-11-23 08:54:58.429099269 +0000 UTC m=+0.333474605 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:54:58 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:54:58 localhost podman[101712]: 2025-11-23 08:54:58.531365579 +0000 UTC m=+0.434018270 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, container_name=nova_compute, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container)
Nov 23 03:54:58 localhost podman[101712]: 2025-11-23 08:54:58.558829843 +0000 UTC m=+0.461482554 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute)
Nov 23 03:54:58 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:54:58 localhost podman[101713]: 2025-11-23 08:54:58.611289524 +0000 UTC m=+0.505852109 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team)
Nov 23 03:54:58 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:55:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:55:04 localhost systemd[1]: tmp-crun.kHBrOe.mount: Deactivated successfully.
Nov 23 03:55:04 localhost podman[101848]: 2025-11-23 08:55:04.187056023 +0000 UTC m=+0.095782619 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:55:04 localhost podman[101848]: 2025-11-23 08:55:04.597112842 +0000 UTC m=+0.505839478 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:55:04 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:55:06 localhost podman[101871]: 2025-11-23 08:55:06.172457569 +0000 UTC m=+0.080875461 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git)
Nov 23 03:55:06 localhost podman[101872]: 2025-11-23 08:55:06.240394253 +0000 UTC m=+0.141189231 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, container_name=ovn_controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Nov 23 03:55:06 localhost podman[101872]: 2025-11-23 08:55:06.256347499 +0000 UTC m=+0.157142537 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 03:55:06 localhost podman[101873]: 2025-11-23 08:55:06.292034232 +0000 UTC m=+0.188822663 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1)
Nov 23 03:55:06 localhost podman[101872]: unhealthy
Nov 23 03:55:06 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:55:06 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:55:06 localhost podman[101871]: 2025-11-23 08:55:06.364830676 +0000 UTC m=+0.273248618 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Nov 23 03:55:06 localhost podman[101871]: unhealthy
Nov 23 03:55:06 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:55:06 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:55:06 localhost podman[101873]: 2025-11-23 08:55:06.484938902 +0000 UTC m=+0.381727373 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, tcib_managed=true, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, distribution-scope=public)
Nov 23 03:55:06 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:55:29 localhost podman[101937]: 2025-11-23 08:55:29.197830853 +0000 UTC m=+0.091396441 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Nov 23 03:55:29 localhost podman[101937]: 2025-11-23 08:55:29.248090695 +0000 UTC m=+0.141656303 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, config_id=tripleo_step5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc.)
Nov 23 03:55:29 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:55:29 localhost podman[101939]: 2025-11-23 08:55:29.251454955 +0000 UTC m=+0.141025887 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Nov 23 03:55:29 localhost podman[101936]: 2025-11-23 08:55:29.308432137 +0000 UTC m=+0.207824380 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, name=rhosp17/openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-cron-container)
Nov 23 03:55:29 localhost podman[101936]: 2025-11-23 08:55:29.31641954 +0000 UTC m=+0.215811823 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12)
Nov 23 03:55:29 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:55:29 localhost podman[101938]: 2025-11-23 08:55:29.36362236 +0000 UTC m=+0.254692362 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1761123044, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:55:29 localhost podman[101939]: 2025-11-23 08:55:29.383518251 +0000 UTC m=+0.273089183 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container)
Nov 23 03:55:29 localhost podman[101938]: 2025-11-23 08:55:29.390175619 +0000 UTC m=+0.281245591 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com)
Nov 23 03:55:29 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:55:29 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:55:29 localhost podman[101945]: 2025-11-23 08:55:29.468570243 +0000 UTC m=+0.353814319 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64)
Nov 23 03:55:29 localhost podman[101952]: 2025-11-23 08:55:29.523467918 +0000 UTC m=+0.403104064 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:55:29 localhost podman[101952]: 2025-11-23 08:55:29.55086565 +0000 UTC m=+0.430501796 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, vcs-type=git)
Nov 23 03:55:29 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:55:29 localhost podman[101945]: 2025-11-23 08:55:29.603908177 +0000 UTC m=+0.489152193 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:55:29 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:55:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:55:35 localhost podman[102150]: 2025-11-23 08:55:35.17416863 +0000 UTC m=+0.078033635 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, tcib_managed=true, release=1761123044, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1)
Nov 23 03:55:35 localhost podman[102150]: 2025-11-23 08:55:35.543006398 +0000 UTC m=+0.446871373 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:55:35 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:55:37 localhost systemd[1]: tmp-crun.zSrcQk.mount: Deactivated successfully.
Nov 23 03:55:37 localhost podman[102173]: 2025-11-23 08:55:37.158148258 +0000 UTC m=+0.067470913 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, 
com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Nov 23 03:55:37 localhost podman[102172]: 2025-11-23 08:55:37.224578731 +0000 UTC m=+0.135159450 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, 
container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044)
Nov 23 03:55:37 localhost podman[102173]: 2025-11-23 08:55:37.238386309 +0000 UTC m=+0.147708954 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, 
name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_controller)
Nov 23 03:55:37 localhost podman[102173]: unhealthy
Nov 23 03:55:37 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:55:37 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:55:37 localhost podman[102174]: 2025-11-23 08:55:37.198523535 +0000 UTC m=+0.100748301 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=metrics_qdr, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Nov 23 03:55:37 localhost podman[102172]: 2025-11-23 08:55:37.268866164 +0000 UTC m=+0.179446893 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public)
Nov 23 03:55:37 localhost podman[102172]: unhealthy
Nov 23 03:55:37 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:55:37 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:55:37 localhost podman[102174]: 2025-11-23 08:55:37.425378503 +0000 UTC m=+0.327603279 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., architecture=x86_64)
Nov 23 03:55:37 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:56:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:56:00 localhost recover_tripleo_nova_virtqemud[102277]: 61733
Nov 23 03:56:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:56:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:56:00 localhost systemd[1]: tmp-crun.YdxcHq.mount: Deactivated successfully.
Nov 23 03:56:00 localhost podman[102252]: 2025-11-23 08:56:00.199647623 +0000 UTC m=+0.084854236 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., 
batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Nov 23 03:56:00 localhost podman[102240]: 2025-11-23 08:56:00.212695491 +0000 UTC m=+0.101196653 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1)
Nov 23 03:56:00 localhost podman[102238]: 2025-11-23 08:56:00.256674825 +0000 UTC m=+0.156770137 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, version=17.1.12)
Nov 23 03:56:00 localhost podman[102240]: 2025-11-23 08:56:00.265805069 +0000 UTC m=+0.154306261 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z)
Nov 23 03:56:00 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:56:00 localhost podman[102238]: 2025-11-23 08:56:00.291744472 +0000 UTC m=+0.191839754 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Nov 23 03:56:00 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:56:00 localhost podman[102252]: 2025-11-23 08:56:00.317740256 +0000 UTC m=+0.202946939 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:56:00 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:56:00 localhost podman[102246]: 2025-11-23 08:56:00.330185268 +0000 UTC m=+0.211075477 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, container_name=collectd, batch=17.1_20251118.1)
Nov 23 03:56:00 localhost podman[102246]: 2025-11-23 08:56:00.341763808 +0000 UTC m=+0.222654047 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-collectd, release=1761123044)
Nov 23 03:56:00 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:56:00 localhost podman[102258]: 2025-11-23 08:56:00.390094488 +0000 UTC m=+0.264682359 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, distribution-scope=public)
Nov 23 03:56:00 localhost podman[102239]: 2025-11-23 08:56:00.308975612 +0000 UTC m=+0.201928282 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64)
Nov 23 03:56:00 localhost podman[102258]: 2025-11-23 08:56:00.424949309 +0000 UTC m=+0.299537220 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Nov 23 03:56:00 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:56:00 localhost podman[102239]: 2025-11-23 08:56:00.443865884 +0000 UTC m=+0.336818524 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:56:00 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:56:06 localhost podman[102374]: 2025-11-23 08:56:06.177679844 +0000 UTC m=+0.084272431 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com)
Nov 23 03:56:06 localhost podman[102374]: 2025-11-23 08:56:06.564303388 +0000 UTC m=+0.470895975 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, 
version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git)
Nov 23 03:56:06 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:56:08 localhost podman[102399]: 2025-11-23 08:56:08.166663226 +0000 UTC m=+0.076844333 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4)
Nov 23 03:56:08 localhost podman[102399]: 2025-11-23 08:56:08.183908786 +0000 UTC m=+0.094089953 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=)
Nov 23 03:56:08 localhost podman[102399]: unhealthy
Nov 23 03:56:08 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:56:08 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:56:08 localhost systemd[1]: tmp-crun.J711dS.mount: Deactivated successfully.
Nov 23 03:56:08 localhost podman[102401]: 2025-11-23 08:56:08.232663468 +0000 UTC m=+0.138585761 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., vcs-type=git, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd)
Nov 23 03:56:08 localhost podman[102400]: 2025-11-23 08:56:08.274634329 +0000 UTC m=+0.183047039 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-ovn-controller)
Nov 23 03:56:08 localhost podman[102400]: 2025-11-23 08:56:08.287196445 +0000 UTC m=+0.195609105 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 23 03:56:08 localhost podman[102400]: unhealthy
Nov 23 03:56:08 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:56:08 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:56:08 localhost podman[102401]: 2025-11-23 08:56:08.467719444 +0000 UTC m=+0.373641757 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:56:08 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:56:31 localhost podman[102476]: 2025-11-23 08:56:31.169669573 +0000 UTC m=+0.068102839 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Nov 23 03:56:31 localhost podman[102476]: 2025-11-23 08:56:31.179720052 +0000 UTC m=+0.078153308 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, architecture=x86_64, tcib_managed=true, container_name=iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:56:31 localhost systemd[1]: tmp-crun.gX9Mvf.mount: Deactivated successfully.
Nov 23 03:56:31 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:56:31 localhost systemd[1]: tmp-crun.0VdKlZ.mount: Deactivated successfully.
Nov 23 03:56:31 localhost podman[102472]: 2025-11-23 08:56:31.219090783 +0000 UTC m=+0.119584264 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:51:28Z, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:56:31 localhost podman[102470]: 2025-11-23 08:56:31.236821937 +0000 UTC m=+0.134944165 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:56:31 localhost podman[102471]: 2025-11-23 08:56:31.18824041 +0000 UTC m=+0.086941093 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-type=git)
Nov 23 03:56:31 localhost podman[102470]: 2025-11-23 08:56:31.290173551 +0000 UTC m=+0.188295779 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:56:31 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:56:31 localhost podman[102469]: 2025-11-23 08:56:31.295138714 +0000 UTC m=+0.199838697 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, container_name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64)
Nov 23 03:56:31 localhost podman[102472]: 2025-11-23 08:56:31.350606725 +0000 UTC m=+0.251100216 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public)
Nov 23 03:56:31 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:56:31 localhost podman[102471]: 2025-11-23 08:56:31.369397736 +0000 UTC m=+0.268098439 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, 
com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:56:31 localhost podman[102469]: 2025-11-23 08:56:31.375957562 +0000 UTC m=+0.280657555 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:32Z, 
managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044)
Nov 23 03:56:31 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:56:31 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:56:31 localhost podman[102479]: 2025-11-23 08:56:31.448404897 +0000 UTC m=+0.339253111 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
io.openshift.expose-services=, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z)
Nov 23 03:56:31 localhost podman[102479]: 2025-11-23 08:56:31.472483419 +0000 UTC m=+0.363331603 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:11:48Z, architecture=x86_64, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:56:31 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:56:33 localhost systemd[1]: tmp-crun.6bWGU8.mount: Deactivated successfully.
Nov 23 03:56:33 localhost podman[102705]: 2025-11-23 08:56:33.199672801 +0000 UTC m=+0.101877302 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 03:56:33 localhost podman[102705]: 2025-11-23 08:56:33.336654869 +0000 UTC m=+0.238859360 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55)
Nov 23 03:56:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:56:37 localhost podman[102849]: 2025-11-23 08:56:37.18570561 +0000 UTC m=+0.089461600 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12)
Nov 23 03:56:37 localhost podman[102849]: 2025-11-23 08:56:37.560343303 +0000 UTC m=+0.464099283 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:56:37 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:56:39 localhost podman[102871]: 2025-11-23 08:56:39.168453104 +0000 UTC m=+0.078250460 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:56:39 localhost podman[102871]: 2025-11-23 08:56:39.212802239 +0000 UTC m=+0.122599625 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent)
Nov 23 03:56:39 localhost podman[102871]: unhealthy
Nov 23 03:56:39 localhost podman[102872]: 2025-11-23 08:56:39.228355474 +0000 UTC m=+0.135923531 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, version=17.1.12)
Nov 23 03:56:39 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:56:39 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:56:39 localhost podman[102873]: 2025-11-23 08:56:39.286317521 +0000 UTC m=+0.189442219 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:56:39 localhost podman[102872]: 2025-11-23 08:56:39.300018577 +0000 UTC m=+0.207586594 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:56:39 localhost podman[102872]: unhealthy
Nov 23 03:56:39 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:56:39 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:56:39 localhost podman[102873]: 2025-11-23 08:56:39.482811229 +0000 UTC m=+0.385935917 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, tcib_managed=true, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z)
Nov 23 03:56:39 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:56:46 localhost sshd[102938]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:57:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:57:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:57:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:57:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:57:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:57:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:57:02 localhost podman[102954]: 2025-11-23 08:57:02.193743839 +0000 UTC m=+0.080839130 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, version=17.1.12, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, release=1761123044, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4)
Nov 23 03:57:02 localhost podman[102954]: 2025-11-23 08:57:02.256769321 +0000 UTC m=+0.143864612 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, build-date=2025-11-19T00:11:48Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute)
Nov 23 03:57:02 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:57:02 localhost podman[102949]: 2025-11-23 08:57:02.257396508 +0000 UTC m=+0.147377826 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:57:02 localhost podman[102942]: 2025-11-23 08:57:02.30913563 +0000 UTC m=+0.201406569 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible)
Nov 23 03:57:02 localhost podman[102949]: 2025-11-23 08:57:02.341099463 +0000 UTC m=+0.231080801 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 23 03:57:02 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:57:02 localhost podman[102950]: 2025-11-23 08:57:02.358667743 +0000 UTC m=+0.247834549 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:57:02 localhost podman[102950]: 2025-11-23 08:57:02.371114245 +0000 UTC m=+0.260281071 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, container_name=iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, release=1761123044)
Nov 23 03:57:02 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:57:02 localhost podman[102942]: 2025-11-23 08:57:02.43949534 +0000 UTC m=+0.331766349 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=)
Nov 23 03:57:02 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:57:02 localhost podman[102941]: 2025-11-23 08:57:02.457307766 +0000 UTC m=+0.354448885 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z)
Nov 23 03:57:02 localhost podman[102940]: 2025-11-23 08:57:02.16761116 +0000 UTC m=+0.072202918 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, release=1761123044)
Nov 23 03:57:02 localhost podman[102940]: 2025-11-23 08:57:02.504303041 +0000 UTC m=+0.408894799 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:57:02 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:57:02 localhost podman[102941]: 2025-11-23 08:57:02.55894512 +0000 UTC m=+0.456086269 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:57:02 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:57:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:57:08 localhost podman[103073]: 2025-11-23 08:57:08.172575131 +0000 UTC m=+0.079815433 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:57:08 localhost podman[103073]: 2025-11-23 08:57:08.54555362 +0000 UTC m=+0.452793982 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, version=17.1.12, container_name=nova_migration_target, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:57:08 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:57:08 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:57:08 localhost recover_tripleo_nova_virtqemud[103096]: 61733
Nov 23 03:57:08 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:57:08 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:57:10 localhost podman[103099]: 2025-11-23 08:57:10.165327893 +0000 UTC m=+0.072097876 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:57:10 localhost podman[103099]: 2025-11-23 08:57:10.181001642 +0000 UTC m=+0.087771665 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, name=rhosp17/openstack-ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_id=tripleo_step4, 
url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller)
Nov 23 03:57:10 localhost podman[103099]: unhealthy
Nov 23 03:57:10 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:57:10 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:57:10 localhost podman[103098]: 2025-11-23 08:57:10.228237973 +0000 UTC m=+0.135946671 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_metadata_agent, url=https://www.redhat.com, release=1761123044, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true)
Nov 23 03:57:10 localhost podman[103098]: 2025-11-23 08:57:10.268317023 +0000 UTC m=+0.176025731 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64)
Nov 23 03:57:10 localhost podman[103098]: unhealthy
Nov 23 03:57:10 localhost systemd[1]: tmp-crun.U30O7z.mount: Deactivated successfully.
Nov 23 03:57:10 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:57:10 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:57:10 localhost podman[103100]: 2025-11-23 08:57:10.288074981 +0000 UTC m=+0.188745861 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, release=1761123044, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:57:10 localhost podman[103100]: 2025-11-23 08:57:10.508915888 +0000 UTC m=+0.409586738 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 
17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Nov 23 03:57:10 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:57:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:57:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:57:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:57:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:57:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:57:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:57:33 localhost podman[103165]: 2025-11-23 08:57:33.188600572 +0000 UTC m=+0.081319533 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:12:45Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64)
Nov 23 03:57:33 localhost podman[103165]: 2025-11-23 08:57:33.211080962 +0000 UTC m=+0.103799953 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Nov 23 03:57:33 localhost podman[103164]: 2025-11-23 08:57:33.167653972 +0000 UTC m=+0.067915644 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=nova_compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044)
Nov 23 03:57:33 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Deactivated successfully.
Nov 23 03:57:33 localhost podman[103177]: 2025-11-23 08:57:33.249487467 +0000 UTC m=+0.136000652 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:57:33 localhost podman[103164]: 2025-11-23 08:57:33.301829125 +0000 UTC m=+0.202090837 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:57:33 localhost podman[103163]: 2025-11-23 08:57:33.302122773 +0000 UTC m=+0.203399492 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=)
Nov 23 03:57:33 localhost podman[103163]: 2025-11-23 08:57:33.314168355 +0000 UTC m=+0.215445054 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Nov 23 03:57:33 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:57:33 localhost podman[103176]: 2025-11-23 08:57:33.348262865 +0000 UTC m=+0.236116216 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, 
config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, vcs-type=git, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, distribution-scope=public)
Nov 23 03:57:33 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Deactivated successfully.
Nov 23 03:57:33 localhost podman[103176]: 2025-11-23 08:57:33.391874279 +0000 UTC m=+0.279727650 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 03:57:33 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:57:33 localhost podman[103170]: 2025-11-23 08:57:33.403283634 +0000 UTC m=+0.293334093 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 03:57:33 localhost podman[103170]: 2025-11-23 08:57:33.417804552 +0000 UTC m=+0.307854991 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, version=17.1.12, container_name=collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1)
Nov 23 03:57:33 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:57:33 localhost podman[103177]: 2025-11-23 08:57:33.429648148 +0000 UTC m=+0.316161353 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute)
Nov 23 03:57:33 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Deactivated successfully.
Nov 23 03:57:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:57:39 localhost systemd[1]: tmp-crun.O2T5ZE.mount: Deactivated successfully.
Nov 23 03:57:39 localhost podman[103367]: 2025-11-23 08:57:39.201099963 +0000 UTC m=+0.105403756 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:57:39 localhost podman[103367]: 2025-11-23 08:57:39.579923669 +0000 UTC m=+0.484227492 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64)
Nov 23 03:57:39 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:57:41 localhost podman[103392]: 2025-11-23 08:57:41.188399099 +0000 UTC m=+0.092076328 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public)
Nov 23 03:57:41 localhost podman[103392]: 2025-11-23 08:57:41.199088995 +0000 UTC m=+0.102766274 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:57:41 localhost podman[103392]: unhealthy
Nov 23 03:57:41 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:57:41 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:57:41 localhost podman[103393]: 2025-11-23 08:57:41.233621648 +0000 UTC m=+0.134429231 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:57:41 localhost podman[103394]: 2025-11-23 08:57:41.294648827 +0000 UTC m=+0.191217637 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:57:41 localhost podman[103393]: 2025-11-23 08:57:41.31985859 +0000 UTC m=+0.220666153 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, container_name=ovn_controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller)
Nov 23 03:57:41 localhost podman[103393]: unhealthy
Nov 23 03:57:41 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:57:41 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:57:41 localhost podman[103394]: 2025-11-23 08:57:41.511642942 +0000 UTC m=+0.408211692 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 03:57:41 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:57:48 localhost sshd[103461]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:58:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:58:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:58:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:58:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:58:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:58:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:58:04 localhost podman[103464]: 2025-11-23 08:58:04.232913766 +0000 UTC m=+0.130780513 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-type=git, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:58:04 localhost podman[103464]: 2025-11-23 08:58:04.245601634 +0000 UTC m=+0.143468371 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:58:04 localhost podman[103464]: unhealthy
Nov 23 03:58:04 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:04 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:58:04 localhost podman[103475]: 2025-11-23 08:58:04.184925794 +0000 UTC m=+0.070973896 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:58:04 localhost podman[103466]: 2025-11-23 08:58:04.346825737 +0000 UTC m=+0.237645097 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, 
version=17.1.12, vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=)
Nov 23 03:58:04 localhost podman[103472]: 2025-11-23 08:58:04.216616481 +0000 UTC m=+0.105334274 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, container_name=iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:58:04 localhost podman[103463]: 2025-11-23 08:58:04.398637551 +0000 UTC m=+0.301623435 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=logrotate_crond, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12)
Nov 23 03:58:04 localhost podman[103472]: 2025-11-23 08:58:04.401151408 +0000 UTC m=+0.289869271 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Nov 23 03:58:04 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:58:04 localhost podman[103475]: 2025-11-23 08:58:04.426271809 +0000 UTC m=+0.312319991 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, 
architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public)
Nov 23 03:58:04 localhost podman[103475]: unhealthy
Nov 23 03:58:04 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:04 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed with result 'exit-code'.
Nov 23 03:58:04 localhost podman[103463]: 2025-11-23 08:58:04.459598378 +0000 UTC m=+0.362584272 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, vcs-type=git)
Nov 23 03:58:04 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:58:04 localhost podman[103465]: 2025-11-23 08:58:04.326982097 +0000 UTC m=+0.218879205 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044)
Nov 23 03:58:04 localhost podman[103466]: 2025-11-23 08:58:04.483035195 +0000 UTC m=+0.373854595 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, 
release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3)
Nov 23 03:58:04 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:58:04 localhost podman[103465]: 2025-11-23 08:58:04.506980634 +0000 UTC m=+0.398877752 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4)
Nov 23 03:58:04 localhost podman[103465]: unhealthy
Nov 23 03:58:04 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:04 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed with result 'exit-code'.
Nov 23 03:58:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:58:10 localhost podman[103584]: 2025-11-23 08:58:10.172203742 +0000 UTC m=+0.077254914 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:58:10 localhost podman[103584]: 2025-11-23 08:58:10.578162483 +0000 UTC m=+0.483213605 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute)
Nov 23 03:58:10 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:58:12 localhost podman[103607]: 2025-11-23 08:58:12.179633896 +0000 UTC m=+0.083645534 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:58:12 localhost podman[103608]: 2025-11-23 08:58:12.237583984 +0000 UTC m=+0.135246922 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, release=1761123044, 
name=rhosp17/openstack-ovn-controller, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Nov 23 03:58:12 localhost podman[103607]: 2025-11-23 08:58:12.249393519 +0000 UTC m=+0.153405137 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, tcib_managed=true)
Nov 23 03:58:12 localhost podman[103607]: unhealthy
Nov 23 03:58:12 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:12 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:58:12 localhost podman[103609]: 2025-11-23 08:58:12.339572577 +0000 UTC m=+0.235312084 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step1, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Nov 23 03:58:12 localhost podman[103608]: 2025-11-23 08:58:12.357471325 +0000 UTC m=+0.255134293 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:58:12 localhost podman[103608]: unhealthy
Nov 23 03:58:12 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:12 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:58:12 localhost podman[103609]: 2025-11-23 08:58:12.567012521 +0000 UTC m=+0.462752038 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red 
Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=metrics_qdr, version=17.1.12)
Nov 23 03:58:12 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:58:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 03:58:34 localhost recover_tripleo_nova_virtqemud[103679]: 61733
Nov 23 03:58:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 03:58:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 03:58:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:58:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:58:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:58:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:58:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:58:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:58:35 localhost podman[103680]: 2025-11-23 08:58:35.196154294 +0000 UTC m=+0.099566319 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:58:35 localhost podman[103681]: 2025-11-23 08:58:35.221006988 +0000 UTC m=+0.121966157 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:58:35 localhost podman[103682]: 2025-11-23 08:58:35.180905678 +0000 UTC m=+0.079257117 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, version=17.1.12)
Nov 23 03:58:35 localhost podman[103680]: 2025-11-23 08:58:35.235036033 +0000 UTC m=+0.138448018 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, container_name=logrotate_crond, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, tcib_managed=true)
Nov 23 03:58:35 localhost podman[103681]: 2025-11-23 08:58:35.240870579 +0000 UTC m=+0.141829778 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 03:58:35 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:58:35 localhost podman[103681]: unhealthy
Nov 23 03:58:35 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:35 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:58:35 localhost podman[103682]: 2025-11-23 08:58:35.264832279 +0000 UTC m=+0.163183708 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi)
Nov 23 03:58:35 localhost podman[103682]: unhealthy
Nov 23 03:58:35 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:35 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed with result 'exit-code'.
Nov 23 03:58:35 localhost podman[103694]: 2025-11-23 08:58:35.311640039 +0000 UTC m=+0.200649219 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., vcs-type=git, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:58:35 localhost podman[103694]: 2025-11-23 08:58:35.321966584 +0000 UTC m=+0.210975724 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, com.redhat.component=openstack-iscsid-container, version=17.1.12, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git)
Nov 23 03:58:35 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:58:35 localhost podman[103688]: 2025-11-23 08:58:35.413858398 +0000 UTC m=+0.304195584 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, version=17.1.12, release=1761123044, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64)
Nov 23 03:58:35 localhost podman[103688]: 2025-11-23 08:58:35.455042458 +0000 UTC m=+0.345379644 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, architecture=x86_64, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, url=https://www.redhat.com)
Nov 23 03:58:35 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:58:35 localhost podman[103701]: 2025-11-23 08:58:35.462774635 +0000 UTC m=+0.344915392 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true)
Nov 23 03:58:35 localhost podman[103701]: 2025-11-23 08:58:35.542327779 +0000 UTC m=+0.424468516 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-type=git, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container)
Nov 23 03:58:35 localhost podman[103701]: unhealthy
Nov 23 03:58:35 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:35 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed with result 'exit-code'.
Nov 23 03:58:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:58:41 localhost podman[103873]: 2025-11-23 08:58:41.176649621 +0000 UTC m=+0.079791661 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:58:41 localhost podman[103873]: 2025-11-23 08:58:41.54999232 +0000 UTC m=+0.453134390 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:58:41 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:58:43 localhost podman[103896]: 2025-11-23 08:58:43.184812626 +0000 UTC m=+0.090387835 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Nov 23 03:58:43 localhost podman[103896]: 2025-11-23 08:58:43.229027866 +0000 UTC m=+0.134603085 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:58:43 localhost podman[103896]: unhealthy
Nov 23 03:58:43 localhost podman[103898]: 2025-11-23 08:58:43.237619476 +0000 UTC m=+0.138338035 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:58:43 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:43 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:58:43 localhost podman[103897]: 2025-11-23 08:58:43.288630168 +0000 UTC m=+0.193279392 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z)
Nov 23 03:58:43 localhost podman[103897]: 2025-11-23 08:58:43.301782319 +0000 UTC m=+0.206431523 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller)
Nov 23 03:58:43 localhost podman[103897]: unhealthy
Nov 23 03:58:43 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:58:43 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:58:43 localhost podman[103898]: 2025-11-23 08:58:43.471906062 +0000 UTC m=+0.372624611 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:58:43 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:58:47 localhost sshd[103966]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:58:55 localhost sshd[103968]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:59:03 localhost sshd[103970]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:59:03 localhost systemd-logind[761]: New session 35 of user zuul.
Nov 23 03:59:03 localhost systemd[1]: Started Session 35 of User zuul.
Nov 23 03:59:04 localhost python3.9[104065]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:59:05 localhost podman[104162]: 2025-11-23 08:59:05.554615675 +0000 UTC m=+0.086489741 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 03:59:05 localhost podman[104163]: 2025-11-23 08:59:05.569075001 +0000 UTC m=+0.092505642 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com)
Nov 23 03:59:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:59:05 localhost podman[104159]: 2025-11-23 08:59:05.573307774 +0000 UTC m=+0.098850340 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step4, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., architecture=x86_64)
Nov 23 03:59:05 localhost podman[104163]: 2025-11-23 08:59:05.59863877 +0000 UTC m=+0.122069481 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 03:59:05 localhost podman[104162]: 2025-11-23 08:59:05.606935932 +0000 UTC m=+0.138809998 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Nov 23 03:59:05 localhost podman[104162]: unhealthy
Nov 23 03:59:05 localhost systemd[1]: tmp-crun.KOTSBk.mount: Deactivated successfully.
Nov 23 03:59:05 localhost podman[104161]: 2025-11-23 08:59:05.61694103 +0000 UTC m=+0.147230063 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute)
Nov 23 03:59:05 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:05 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed with result 'exit-code'.
Nov 23 03:59:05 localhost python3.9[104160]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 03:59:05 localhost podman[104159]: 2025-11-23 08:59:05.662059624 +0000 UTC m=+0.187602220 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Nov 23 03:59:05 localhost podman[104229]: 2025-11-23 08:59:05.679665734 +0000 UTC m=+0.093009414 container health_status e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, 
distribution-scope=public, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 03:59:05 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:59:05 localhost podman[104229]: 2025-11-23 08:59:05.709091289 +0000 UTC m=+0.122434979 container exec_died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Nov 23 03:59:05 localhost podman[104229]: unhealthy
Nov 23 03:59:05 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:05 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed with result 'exit-code'.
Nov 23 03:59:05 localhost podman[104212]: 2025-11-23 08:59:05.786874637 +0000 UTC m=+0.218553737 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 03:59:05 localhost podman[104161]: 2025-11-23 08:59:05.815641435 +0000 UTC m=+0.345930368 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Nov 23 03:59:05 localhost podman[104161]: unhealthy
Nov 23 03:59:05 localhost podman[104212]: 2025-11-23 08:59:05.826620128 +0000 UTC m=+0.258299228 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-collectd, config_id=tripleo_step3)
Nov 23 03:59:05 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:05 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:59:05 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:59:05 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:59:06 localhost python3.9[104369]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 03:59:07 localhost python3.9[104463]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 03:59:07 localhost python3.9[104556]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 03:59:08 localhost python3.9[104647]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 23 03:59:10 localhost python3.9[104737]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 03:59:11 localhost python3.9[104829]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 23 03:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:59:11 localhost podman[104876]: 2025-11-23 08:59:11.909983342 +0000 UTC m=+0.066682842 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 03:59:12 localhost podman[104876]: 2025-11-23 08:59:12.270962131 +0000 UTC m=+0.427661621 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, release=1761123044, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute)
Nov 23 03:59:12 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:59:12 localhost python3.9[104942]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 03:59:13 localhost python3.9[104990]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 03:59:13 localhost systemd[1]: session-35.scope: Deactivated successfully.
Nov 23 03:59:13 localhost systemd[1]: session-35.scope: Consumed 4.911s CPU time.
Nov 23 03:59:13 localhost systemd-logind[761]: Session 35 logged out. Waiting for processes to exit.
Nov 23 03:59:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:59:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:59:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:59:13 localhost systemd-logind[761]: Removed session 35.
Nov 23 03:59:13 localhost podman[105007]: 2025-11-23 08:59:13.986741788 +0000 UTC m=+0.076990826 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, tcib_managed=true, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Nov 23 03:59:14 localhost podman[105007]: 2025-11-23 08:59:14.002954811 +0000 UTC m=+0.093203879 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12)
Nov 23 03:59:14 localhost podman[105007]: unhealthy
Nov 23 03:59:14 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:14 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:59:14 localhost podman[105008]: 2025-11-23 08:59:14.086780489 +0000 UTC m=+0.171459179 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=metrics_qdr, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:59:14 localhost systemd[1]: tmp-crun.cuAND3.mount: Deactivated successfully.
Nov 23 03:59:14 localhost podman[105006]: 2025-11-23 08:59:14.148813406 +0000 UTC m=+0.239381013 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, 
url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:59:14 localhost podman[105006]: 2025-11-23 08:59:14.191935108 +0000 UTC m=+0.282502675 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12)
Nov 23 03:59:14 localhost podman[105006]: unhealthy
Nov 23 03:59:14 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:14 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:59:14 localhost podman[105008]: 2025-11-23 08:59:14.275814126 +0000 UTC m=+0.360492826 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1761123044, container_name=metrics_qdr, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container)
Nov 23 03:59:14 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:59:20 localhost sshd[105074]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 03:59:21 localhost systemd-logind[761]: New session 36 of user zuul.
Nov 23 03:59:21 localhost systemd[1]: Started Session 36 of User zuul.
Nov 23 03:59:22 localhost python3.9[105169]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 03:59:22 localhost systemd[1]: Reloading.
Nov 23 03:59:22 localhost systemd-rc-local-generator[105195]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:59:22 localhost systemd-sysv-generator[105199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:59:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:59:23 localhost python3.9[105294]: ansible-ansible.builtin.service_facts Invoked
Nov 23 03:59:23 localhost network[105311]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 03:59:23 localhost network[105312]: 'network-scripts' will be removed from distribution in near future.
Nov 23 03:59:23 localhost network[105313]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 03:59:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:59:27 localhost python3.9[105511]: ansible-ansible.builtin.service_facts Invoked
Nov 23 03:59:27 localhost network[105528]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 03:59:27 localhost network[105529]: 'network-scripts' will be removed from distribution in near future.
Nov 23 03:59:27 localhost network[105530]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 03:59:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:59:31 localhost python3.9[105730]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 03:59:32 localhost systemd[1]: Reloading.
Nov 23 03:59:32 localhost systemd-rc-local-generator[105755]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 03:59:32 localhost systemd-sysv-generator[105760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 03:59:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 03:59:32 localhost systemd[1]: Stopping ceilometer_agent_compute container...
Nov 23 03:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 03:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 03:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 03:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 03:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 03:59:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 03:59:36 localhost podman[105784]: 2025-11-23 08:59:36.20724897 +0000 UTC m=+0.101424619 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, version=17.1.12, com.redhat.component=openstack-cron-container, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, vcs-type=git, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team)
Nov 23 03:59:36 localhost systemd[1]: tmp-crun.mTGGxn.mount: Deactivated successfully.
Nov 23 03:59:36 localhost podman[105804]: Error: container e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 is not running
Nov 23 03:59:36 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Main process exited, code=exited, status=125/n/a
Nov 23 03:59:36 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed with result 'exit-code'.
Nov 23 03:59:36 localhost podman[105801]: 2025-11-23 08:59:36.224910102 +0000 UTC m=+0.102573870 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, release=1761123044, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 03:59:36 localhost podman[105784]: 2025-11-23 08:59:36.301246121 +0000 UTC m=+0.195421810 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, container_name=logrotate_crond, name=rhosp17/openstack-cron)
Nov 23 03:59:36 localhost podman[105801]: 2025-11-23 08:59:36.308966257 +0000 UTC m=+0.186630065 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc., container_name=iscsid)
Nov 23 03:59:36 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 03:59:36 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 03:59:36 localhost podman[105786]: 2025-11-23 08:59:36.334371815 +0000 UTC m=+0.221452474 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Nov 23 03:59:36 localhost podman[105785]: 2025-11-23 08:59:36.352341405 +0000 UTC m=+0.241054778 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044)
Nov 23 03:59:36 localhost podman[105785]: 2025-11-23 08:59:36.376034988 +0000 UTC m=+0.264748361 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true)
Nov 23 03:59:36 localhost podman[105785]: unhealthy
Nov 23 03:59:36 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:36 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 03:59:36 localhost podman[105792]: 2025-11-23 08:59:36.286954339 +0000 UTC m=+0.166673242 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, 
com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git)
Nov 23 03:59:36 localhost podman[105792]: 2025-11-23 08:59:36.41696427 +0000 UTC m=+0.296683133 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, vcs-type=git, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3)
Nov 23 03:59:36 localhost podman[105786]: 2025-11-23 08:59:36.426548727 +0000 UTC m=+0.313629366 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 03:59:36 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 03:59:36 localhost podman[105786]: unhealthy
Nov 23 03:59:36 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:36 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed with result 'exit-code'.
Nov 23 03:59:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59221 DF PROTO=TCP SPT=36496 DPT=9101 SEQ=1272489113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75750DD40000000001030307) 
Nov 23 03:59:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34458 DF PROTO=TCP SPT=37832 DPT=9882 SEQ=36527219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575118C0000000001030307) 
Nov 23 03:59:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59222 DF PROTO=TCP SPT=36496 DPT=9101 SEQ=1272489113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757511D90000000001030307) 
Nov 23 03:59:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34459 DF PROTO=TCP SPT=37832 DPT=9882 SEQ=36527219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757515990000000001030307) 
Nov 23 03:59:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 03:59:42 localhost podman[105972]: 2025-11-23 08:59:42.426815441 +0000 UTC m=+0.078597230 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4)
Nov 23 03:59:42 localhost podman[105972]: 2025-11-23 08:59:42.839223274 +0000 UTC m=+0.491005103 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red 
Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=nova_migration_target)
Nov 23 03:59:42 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 03:59:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59223 DF PROTO=TCP SPT=36496 DPT=9101 SEQ=1272489113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757519DA0000000001030307) 
Nov 23 03:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 03:59:44 localhost podman[105995]: 2025-11-23 08:59:44.181744133 +0000 UTC m=+0.089898222 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, architecture=x86_64, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1)
Nov 23 03:59:44 localhost podman[105995]: 2025-11-23 08:59:44.227005711 +0000 UTC m=+0.135159800 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, 
version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, tcib_managed=true)
Nov 23 03:59:44 localhost podman[105995]: unhealthy
Nov 23 03:59:44 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:44 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 03:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 03:59:44 localhost podman[106015]: 2025-11-23 08:59:44.358862362 +0000 UTC m=+0.083398778 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 03:59:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34460 DF PROTO=TCP SPT=37832 DPT=9882 SEQ=36527219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75751D990000000001030307) 
Nov 23 03:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 03:59:44 localhost podman[106015]: 2025-11-23 08:59:44.41009294 +0000 UTC m=+0.134629336 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc.)
Nov 23 03:59:44 localhost podman[106015]: unhealthy
Nov 23 03:59:44 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 03:59:44 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 03:59:44 localhost podman[106035]: 2025-11-23 08:59:44.48275849 +0000 UTC m=+0.090128347 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, 
build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Nov 23 03:59:44 localhost podman[106035]: 2025-11-23 08:59:44.724172708 +0000 UTC m=+0.331542555 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 23 03:59:44 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 03:59:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59224 DF PROTO=TCP SPT=36496 DPT=9101 SEQ=1272489113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757529990000000001030307) 
Nov 23 03:59:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34461 DF PROTO=TCP SPT=37832 DPT=9882 SEQ=36527219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75752D590000000001030307) 
Nov 23 03:59:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13503 DF PROTO=TCP SPT=48902 DPT=9100 SEQ=1460293765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757534260000000001030307) 
Nov 23 03:59:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13504 DF PROTO=TCP SPT=48902 DPT=9100 SEQ=1460293765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575381A0000000001030307) 
Nov 23 03:59:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26992 DF PROTO=TCP SPT=49332 DPT=9105 SEQ=479738867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75753C840000000001030307) 
Nov 23 03:59:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13505 DF PROTO=TCP SPT=48902 DPT=9100 SEQ=1460293765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575401A0000000001030307) 
Nov 23 03:59:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5618 DF PROTO=TCP SPT=41782 DPT=9102 SEQ=48251720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575405F0000000001030307) 
Nov 23 03:59:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26993 DF PROTO=TCP SPT=49332 DPT=9105 SEQ=479738867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575409A0000000001030307) 
Nov 23 03:59:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5619 DF PROTO=TCP SPT=41782 DPT=9102 SEQ=48251720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757544590000000001030307) 
Nov 23 03:59:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26994 DF PROTO=TCP SPT=49332 DPT=9105 SEQ=479738867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757548990000000001030307) 
Nov 23 03:59:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59225 DF PROTO=TCP SPT=36496 DPT=9101 SEQ=1272489113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75754ADA0000000001030307) 
Nov 23 03:59:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5620 DF PROTO=TCP SPT=41782 DPT=9102 SEQ=48251720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75754C590000000001030307) 
Nov 23 03:59:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34462 DF PROTO=TCP SPT=37832 DPT=9882 SEQ=36527219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75754CD90000000001030307) 
Nov 23 03:59:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13506 DF PROTO=TCP SPT=48902 DPT=9100 SEQ=1460293765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75754FD90000000001030307) 
Nov 23 03:59:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26995 DF PROTO=TCP SPT=49332 DPT=9105 SEQ=479738867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757558590000000001030307) 
Nov 23 04:00:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13507 DF PROTO=TCP SPT=48902 DPT=9100 SEQ=1460293765 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757570DA0000000001030307) 
Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 04:00:06 localhost podman[106068]: 2025-11-23 09:00:06.68407152 +0000 UTC m=+0.083141470 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com)
Nov 23 04:00:06 localhost podman[106068]: 2025-11-23 09:00:06.719162778 +0000 UTC m=+0.118232768 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Nov 23 04:00:06 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 04:00:06 localhost podman[106069]: 2025-11-23 09:00:06.735834413 +0000 UTC m=+0.133898837 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, container_name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:00:06 localhost podman[106069]: 2025-11-23 09:00:06.752360534 +0000 UTC m=+0.150424938 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step5, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, container_name=nova_compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git)
Nov 23 04:00:06 localhost podman[106071]: 2025-11-23 09:00:06.784220145 +0000 UTC m=+0.173416512 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.openshift.expose-services=)
Nov 23 04:00:06 localhost podman[106071]: 2025-11-23 09:00:06.790121082 +0000 UTC m=+0.179317429 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Nov 23 04:00:06 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 04:00:06 localhost podman[106069]: unhealthy
Nov 23 04:00:06 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:00:06 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 04:00:06 localhost podman[106079]: 2025-11-23 09:00:06.85633396 +0000 UTC m=+0.242360113 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, 
build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Nov 23 04:00:06 localhost podman[106083]: Error: container e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 is not running
Nov 23 04:00:06 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Main process exited, code=exited, status=125/n/a
Nov 23 04:00:06 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed with result 'exit-code'.
Nov 23 04:00:06 localhost podman[106079]: 2025-11-23 09:00:06.895151107 +0000 UTC m=+0.281177230 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, version=17.1.12, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:00:06 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 04:00:06 localhost podman[106070]: 2025-11-23 09:00:06.946125008 +0000 UTC m=+0.339451435 container health_status 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, version=17.1.12)
Nov 23 04:00:06 localhost podman[106070]: 2025-11-23 09:00:06.991190681 +0000 UTC m=+0.384517118 container exec_died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 04:00:06 localhost podman[106070]: unhealthy
Nov 23 04:00:07 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:00:07 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed with result 'exit-code'.
Nov 23 04:00:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26996 DF PROTO=TCP SPT=49332 DPT=9105 SEQ=479738867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757578D90000000001030307) 
Nov 23 04:00:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5622 DF PROTO=TCP SPT=41782 DPT=9102 SEQ=48251720 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75757CD90000000001030307) 
Nov 23 04:00:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15450 DF PROTO=TCP SPT=49618 DPT=9882 SEQ=3693669544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757586BC0000000001030307) 
Nov 23 04:00:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 04:00:13 localhost podman[106179]: 2025-11-23 09:00:13.162913934 +0000 UTC m=+0.073893084 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Nov 23 04:00:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38290 DF PROTO=TCP SPT=49292 DPT=9101 SEQ=1061767708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75758F190000000001030307) 
Nov 23 04:00:13 localhost podman[106179]: 2025-11-23 09:00:13.536976663 +0000 UTC m=+0.447955843 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 04:00:13 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 04:00:14 localhost podman[105771]: time="2025-11-23T09:00:14Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL"
Nov 23 04:00:15 localhost podman[105771]: 2025-11-23 09:00:15.006274987 +0000 UTC m=+42.094398143 container died e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z)
Nov 23 04:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 04:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 04:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 04:00:15 localhost systemd[1]: libpod-e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.scope: Deactivated successfully.
Nov 23 04:00:15 localhost systemd[1]: libpod-e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.scope: Consumed 5.932s CPU time.
Nov 23 04:00:15 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.timer: Deactivated successfully.
Nov 23 04:00:15 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.
Nov 23 04:00:15 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed to open /run/systemd/transient/e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: No such file or directory
Nov 23 04:00:15 localhost systemd[1]: tmp-crun.Q3K4hv.mount: Deactivated successfully.
Nov 23 04:00:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931-userdata-shm.mount: Deactivated successfully.
Nov 23 04:00:15 localhost podman[106205]: 2025-11-23 09:00:15.124320439 +0000 UTC m=+0.093118177 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible)
Nov 23 04:00:15 localhost podman[106211]: 2025-11-23 09:00:15.233610058 +0000 UTC m=+0.202326834 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4)
Nov 23 04:00:15 localhost podman[105771]: 2025-11-23 09:00:15.255452851 +0000 UTC m=+42.343575997 container cleanup e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Nov 23 04:00:15 localhost podman[105771]: ceilometer_agent_compute
Nov 23 04:00:15 localhost podman[106211]: 2025-11-23 09:00:15.257818875 +0000 UTC m=+0.226535641 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 
17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, distribution-scope=public)
Nov 23 04:00:15 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.timer: Failed to open /run/systemd/transient/e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.timer: No such file or directory
Nov 23 04:00:15 localhost podman[106211]: unhealthy
Nov 23 04:00:15 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed to open /run/systemd/transient/e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: No such file or directory
Nov 23 04:00:15 localhost podman[106204]: 2025-11-23 09:00:15.266177658 +0000 UTC m=+0.247519420 container cleanup e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, release=1761123044, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container)
Nov 23 04:00:15 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:00:15 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 04:00:15 localhost systemd[1]: libpod-conmon-e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.scope: Deactivated successfully.
Nov 23 04:00:15 localhost podman[106212]: 2025-11-23 09:00:15.152701197 +0000 UTC m=+0.116977124 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=metrics_qdr, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Nov 23 04:00:15 localhost podman[106212]: 2025-11-23 09:00:15.332847328 +0000 UTC m=+0.297123295 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, tcib_managed=true, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public)
Nov 23 04:00:15 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 04:00:15 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.timer: Failed to open /run/systemd/transient/e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.timer: No such file or directory
Nov 23 04:00:15 localhost systemd[1]: e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: Failed to open /run/systemd/transient/e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931.service: No such file or directory
Nov 23 04:00:15 localhost podman[106285]: 2025-11-23 09:00:15.38046505 +0000 UTC m=+0.078649601 container cleanup e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute)
Nov 23 04:00:15 localhost podman[106285]: ceilometer_agent_compute
Nov 23 04:00:15 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully.
Nov 23 04:00:15 localhost systemd[1]: Stopped ceilometer_agent_compute container.
Nov 23 04:00:15 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.127s CPU time, no IO.
Nov 23 04:00:15 localhost podman[106205]: 2025-11-23 09:00:15.414972941 +0000 UTC m=+0.383770749 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git)
Nov 23 04:00:15 localhost podman[106205]: unhealthy
Nov 23 04:00:15 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:00:15 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 04:00:15 localhost systemd[1]: var-lib-containers-storage-overlay-beb9ea471cf56dc90569406e830c1ff52ff8d53f4c5c1a577e872b53deec2a8c-merged.mount: Deactivated successfully.
Nov 23 04:00:16 localhost python3.9[106390]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:00:16 localhost systemd[1]: Reloading.
Nov 23 04:00:16 localhost systemd-rc-local-generator[106415]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:00:16 localhost systemd-sysv-generator[106420]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:00:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:00:16 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 04:00:16 localhost systemd[1]: Stopping ceilometer_agent_ipmi container...
Nov 23 04:00:16 localhost recover_tripleo_nova_virtqemud[106431]: 61733
Nov 23 04:00:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 04:00:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 04:00:16 localhost systemd[1]: tmp-crun.1fLQr7.mount: Deactivated successfully.
Nov 23 04:00:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38291 DF PROTO=TCP SPT=49292 DPT=9101 SEQ=1061767708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75759ED90000000001030307) 
Nov 23 04:00:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44360 DF PROTO=TCP SPT=54616 DPT=9100 SEQ=3703200369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575A9570000000001030307) 
Nov 23 04:00:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44362 DF PROTO=TCP SPT=54616 DPT=9100 SEQ=3703200369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575B5590000000001030307) 
Nov 23 04:00:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57977 DF PROTO=TCP SPT=55912 DPT=9105 SEQ=3862644144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575BDDA0000000001030307) 
Nov 23 04:00:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57978 DF PROTO=TCP SPT=55912 DPT=9105 SEQ=3862644144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575CD990000000001030307) 
Nov 23 04:00:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44364 DF PROTO=TCP SPT=54616 DPT=9100 SEQ=3703200369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575E4DA0000000001030307) 
Nov 23 04:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 04:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 04:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 04:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 04:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 04:00:37 localhost podman[106448]: 2025-11-23 09:00:37.191500899 +0000 UTC m=+0.094901815 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 04:00:37 localhost podman[106449]: Error: container 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 is not running
Nov 23 04:00:37 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Main process exited, code=exited, status=125/n/a
Nov 23 04:00:37 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed with result 'exit-code'.
Nov 23 04:00:37 localhost podman[106466]: 2025-11-23 09:00:37.3012814 +0000 UTC m=+0.189604664 container health_status aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid)
Nov 23 04:00:37 localhost podman[106466]: 2025-11-23 09:00:37.338054782 +0000 UTC m=+0.226378056 container exec_died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Nov 23 04:00:37 localhost podman[106447]: 2025-11-23 09:00:37.351887562 +0000 UTC m=+0.255630087 container health_status 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4)
Nov 23 04:00:37 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Deactivated successfully.
Nov 23 04:00:37 localhost podman[106447]: 2025-11-23 09:00:37.384216954 +0000 UTC m=+0.287959409 container exec_died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, version=17.1.12, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron)
Nov 23 04:00:37 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Deactivated successfully.
Nov 23 04:00:37 localhost podman[106458]: 2025-11-23 09:00:37.394678304 +0000 UTC m=+0.288478354 container health_status 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step3, container_name=collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Nov 23 04:00:37 localhost podman[106458]: 2025-11-23 09:00:37.405088462 +0000 UTC m=+0.298888522 container exec_died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, version=17.1.12, container_name=collectd, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Nov 23 04:00:37 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Deactivated successfully.
Nov 23 04:00:37 localhost podman[106448]: 2025-11-23 09:00:37.465244228 +0000 UTC m=+0.368645154 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, release=1761123044, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, architecture=x86_64, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 04:00:37 localhost podman[106448]: unhealthy
Nov 23 04:00:37 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:00:37 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 04:00:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57979 DF PROTO=TCP SPT=55912 DPT=9105 SEQ=3862644144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575EED90000000001030307) 
Nov 23 04:00:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12806 DF PROTO=TCP SPT=50526 DPT=9102 SEQ=2047034120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575F0D90000000001030307) 
Nov 23 04:00:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48458 DF PROTO=TCP SPT=40080 DPT=9882 SEQ=1019307030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7575FBED0000000001030307) 
Nov 23 04:00:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41548 DF PROTO=TCP SPT=55886 DPT=9101 SEQ=3723344017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757604590000000001030307) 
Nov 23 04:00:43 localhost sshd[106613]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 04:00:43 localhost podman[106615]: 2025-11-23 09:00:43.924360696 +0000 UTC m=+0.076310719 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=)
Nov 23 04:00:44 localhost podman[106615]: 2025-11-23 09:00:44.266079071 +0000 UTC m=+0.418029144 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4)
Nov 23 04:00:44 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 04:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 04:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 04:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 04:00:46 localhost systemd[1]: tmp-crun.HJYgPN.mount: Deactivated successfully.
Nov 23 04:00:46 localhost podman[106641]: 2025-11-23 09:00:46.195762709 +0000 UTC m=+0.101449250 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64)
Nov 23 04:00:46 localhost podman[106642]: 2025-11-23 09:00:46.24375243 +0000 UTC m=+0.143980695 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller)
Nov 23 04:00:46 localhost podman[106643]: 2025-11-23 09:00:46.301720049 +0000 UTC m=+0.201994215 container health_status c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, container_name=metrics_qdr, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, 
tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 04:00:46 localhost podman[106642]: 2025-11-23 09:00:46.314084518 +0000 UTC m=+0.214312823 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, distribution-scope=public, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container)
Nov 23 04:00:46 localhost podman[106642]: unhealthy
Nov 23 04:00:46 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:00:46 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 04:00:46 localhost podman[106641]: 2025-11-23 09:00:46.367950806 +0000 UTC m=+0.273637387 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 04:00:46 localhost podman[106641]: unhealthy
Nov 23 04:00:46 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:00:46 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 04:00:46 localhost podman[106643]: 2025-11-23 09:00:46.500475356 +0000 UTC m=+0.400749462 container exec_died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, 
config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true)
Nov 23 04:00:46 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Deactivated successfully.
Nov 23 04:00:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41549 DF PROTO=TCP SPT=55886 DPT=9101 SEQ=3723344017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576141A0000000001030307) 
Nov 23 04:00:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9682 DF PROTO=TCP SPT=55454 DPT=9100 SEQ=3231694529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75761E870000000001030307) 
Nov 23 04:00:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9684 DF PROTO=TCP SPT=55454 DPT=9100 SEQ=3231694529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75762A9A0000000001030307) 
Nov 23 04:00:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60432 DF PROTO=TCP SPT=49512 DPT=9105 SEQ=2877418657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757632DA0000000001030307) 
Nov 23 04:00:58 localhost podman[106433]: time="2025-11-23T09:00:58Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL"
Nov 23 04:00:58 localhost systemd[1]: libpod-88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.scope: Deactivated successfully.
Nov 23 04:00:58 localhost systemd[1]: libpod-88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.scope: Consumed 6.485s CPU time.
Nov 23 04:00:58 localhost podman[106433]: 2025-11-23 09:00:58.630178184 +0000 UTC m=+42.083019732 container died 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Nov 23 04:00:58 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.timer: Deactivated successfully.
Nov 23 04:00:58 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.
Nov 23 04:00:58 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed to open /run/systemd/transient/88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: No such file or directory
Nov 23 04:00:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066-userdata-shm.mount: Deactivated successfully.
Nov 23 04:00:58 localhost systemd[1]: var-lib-containers-storage-overlay-d30b74a826698ec4802daa9c13e3cae799596770c17ebe45666d9bdf35d0eb90-merged.mount: Deactivated successfully.
Nov 23 04:00:58 localhost podman[106433]: 2025-11-23 09:00:58.686864438 +0000 UTC m=+42.139705996 container cleanup 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:00:58 localhost podman[106433]: ceilometer_agent_ipmi
Nov 23 04:00:58 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.timer: Failed to open /run/systemd/transient/88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.timer: No such file or directory
Nov 23 04:00:58 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed to open /run/systemd/transient/88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: No such file or directory
Nov 23 04:00:58 localhost podman[106713]: 2025-11-23 09:00:58.721554674 +0000 UTC m=+0.081654901 container cleanup 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Nov 23 04:00:58 localhost systemd[1]: libpod-conmon-88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.scope: Deactivated successfully.
Nov 23 04:00:58 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.timer: Failed to open /run/systemd/transient/88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.timer: No such file or directory
Nov 23 04:00:58 localhost systemd[1]: 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: Failed to open /run/systemd/transient/88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066.service: No such file or directory
Nov 23 04:00:58 localhost podman[106728]: 2025-11-23 09:00:58.826546668 +0000 UTC m=+0.066507397 container cleanup 88facc757cd782e6c97706dbef85c8a0e0b70d2cfe3f0e1a25e3dfebca8c0066 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi)
Nov 23 04:00:58 localhost podman[106728]: ceilometer_agent_ipmi
Nov 23 04:00:58 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully.
Nov 23 04:00:58 localhost systemd[1]: Stopped ceilometer_agent_ipmi container.
Nov 23 04:00:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60433 DF PROTO=TCP SPT=49512 DPT=9105 SEQ=2877418657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757642990000000001030307) 
Nov 23 04:00:59 localhost python3.9[106830]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:00:59 localhost systemd[1]: Reloading.
Nov 23 04:00:59 localhost systemd-rc-local-generator[106854]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:00:59 localhost systemd-sysv-generator[106859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:00:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:01:00 localhost systemd[1]: Stopping collectd container...
Nov 23 04:01:04 localhost systemd[1]: libpod-90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.scope: Deactivated successfully.
Nov 23 04:01:04 localhost systemd[1]: libpod-90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.scope: Consumed 2.173s CPU time.
Nov 23 04:01:04 localhost podman[106871]: 2025-11-23 09:01:04.406493099 +0000 UTC m=+4.342548850 container died 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://www.redhat.com)
Nov 23 04:01:04 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.timer: Deactivated successfully.
Nov 23 04:01:04 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.
Nov 23 04:01:04 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Failed to open /run/systemd/transient/90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: No such file or directory
Nov 23 04:01:04 localhost systemd[1]: tmp-crun.JH065M.mount: Deactivated successfully.
Nov 23 04:01:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652-userdata-shm.mount: Deactivated successfully.
Nov 23 04:01:04 localhost podman[106871]: 2025-11-23 09:01:04.455108488 +0000 UTC m=+4.391164219 container cleanup 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 04:01:04 localhost podman[106871]: collectd
Nov 23 04:01:04 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.timer: Failed to open /run/systemd/transient/90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.timer: No such file or directory
Nov 23 04:01:04 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Failed to open /run/systemd/transient/90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: No such file or directory
Nov 23 04:01:04 localhost podman[106911]: 2025-11-23 09:01:04.501416754 +0000 UTC m=+0.083091889 container cleanup 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Nov 23 04:01:04 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:01:04 localhost systemd[1]: libpod-conmon-90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.scope: Deactivated successfully.
Nov 23 04:01:04 localhost podman[106939]: error opening file `/run/crun/90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652/status`: No such file or directory
Nov 23 04:01:04 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.timer: Failed to open /run/systemd/transient/90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.timer: No such file or directory
Nov 23 04:01:04 localhost systemd[1]: 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: Failed to open /run/systemd/transient/90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652.service: No such file or directory
Nov 23 04:01:04 localhost podman[106927]: 2025-11-23 09:01:04.614149875 +0000 UTC m=+0.080651725 container cleanup 90fa8fa78bedbb0ffbae571e9b5e459f75ae3e929a1f0f6f67eadf15d0267652 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:01:04 localhost podman[106927]: collectd
Nov 23 04:01:04 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'.
Nov 23 04:01:04 localhost systemd[1]: Stopped collectd container.
Nov 23 04:01:05 localhost python3.9[107034]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:05 localhost systemd[1]: var-lib-containers-storage-overlay-467d527827cab77d79ca943209511906e8bb483640c5d029ef06bb9a4c899f9d-merged.mount: Deactivated successfully.
Nov 23 04:01:05 localhost systemd[1]: Reloading.
Nov 23 04:01:05 localhost systemd-sysv-generator[107066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:01:05 localhost systemd-rc-local-generator[107063]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:01:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9686 DF PROTO=TCP SPT=55454 DPT=9100 SEQ=3231694529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75765AD90000000001030307) 
Nov 23 04:01:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:01:05 localhost systemd[1]: Stopping iscsid container...
Nov 23 04:01:05 localhost systemd[1]: libpod-aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.scope: Deactivated successfully.
Nov 23 04:01:05 localhost systemd[1]: libpod-aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.scope: Consumed 1.155s CPU time.
Nov 23 04:01:05 localhost podman[107074]: 2025-11-23 09:01:05.853877488 +0000 UTC m=+0.085798732 container died aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true)
Nov 23 04:01:05 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.timer: Deactivated successfully.
Nov 23 04:01:05 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.
Nov 23 04:01:05 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Failed to open /run/systemd/transient/aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: No such file or directory
Nov 23 04:01:05 localhost podman[107074]: 2025-11-23 09:01:05.900711329 +0000 UTC m=+0.132632533 container cleanup aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git)
Nov 23 04:01:05 localhost podman[107074]: iscsid
Nov 23 04:01:05 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.timer: Failed to open /run/systemd/transient/aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.timer: No such file or directory
Nov 23 04:01:05 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Failed to open /run/systemd/transient/aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: No such file or directory
Nov 23 04:01:05 localhost podman[107086]: 2025-11-23 09:01:05.942450704 +0000 UTC m=+0.079517335 container cleanup aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, release=1761123044, architecture=x86_64, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Nov 23 04:01:05 localhost systemd[1]: libpod-conmon-aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.scope: Deactivated successfully.
Nov 23 04:01:06 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.timer: Failed to open /run/systemd/transient/aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.timer: No such file or directory
Nov 23 04:01:06 localhost systemd[1]: aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: Failed to open /run/systemd/transient/aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb.service: No such file or directory
Nov 23 04:01:06 localhost podman[107102]: 2025-11-23 09:01:06.059410656 +0000 UTC m=+0.079557725 container cleanup aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, release=1761123044, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, container_name=iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Nov 23 04:01:06 localhost podman[107102]: iscsid
Nov 23 04:01:06 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Nov 23 04:01:06 localhost systemd[1]: Stopped iscsid container.
Nov 23 04:01:06 localhost sshd[107115]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:01:06 localhost systemd[1]: var-lib-containers-storage-overlay-3ff2a78c8cc62b07a928d0b2b3f68754d6aca28a37f592a56866830a4a003509-merged.mount: Deactivated successfully.
Nov 23 04:01:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa7c4d56dd4f29c19a3010936b2e7ad46b28df54b0a7bd7437ee827a00d2fddb-userdata-shm.mount: Deactivated successfully.
Nov 23 04:01:06 localhost python3.9[107208]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:06 localhost systemd[1]: Reloading.
Nov 23 04:01:07 localhost systemd-rc-local-generator[107232]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:01:07 localhost systemd-sysv-generator[107239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:01:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:01:07 localhost systemd[1]: Stopping logrotate_crond container...
Nov 23 04:01:07 localhost systemd[1]: libpod-4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.scope: Deactivated successfully.
Nov 23 04:01:07 localhost systemd[1]: libpod-4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.scope: Consumed 1.048s CPU time.
Nov 23 04:01:07 localhost podman[107249]: 2025-11-23 09:01:07.333105658 +0000 UTC m=+0.064957816 container died 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, config_id=tripleo_step4, container_name=logrotate_crond, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Nov 23 04:01:07 localhost systemd[1]: tmp-crun.8IfEAi.mount: Deactivated successfully.
Nov 23 04:01:07 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.timer: Deactivated successfully.
Nov 23 04:01:07 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.
Nov 23 04:01:07 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Failed to open /run/systemd/transient/4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: No such file or directory
Nov 23 04:01:07 localhost systemd[1]: var-lib-containers-storage-overlay-fd19aa3bbf1d46933c6539044a339b8340627ee4c1548c2610024703ba9478a8-merged.mount: Deactivated successfully.
Nov 23 04:01:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433-userdata-shm.mount: Deactivated successfully.
Nov 23 04:01:07 localhost podman[107249]: 2025-11-23 09:01:07.443962378 +0000 UTC m=+0.175814516 container cleanup 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Nov 23 04:01:07 localhost podman[107249]: logrotate_crond
Nov 23 04:01:07 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.timer: Failed to open /run/systemd/transient/4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.timer: No such file or directory
Nov 23 04:01:07 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Failed to open /run/systemd/transient/4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: No such file or directory
Nov 23 04:01:07 localhost podman[107262]: 2025-11-23 09:01:07.469690555 +0000 UTC m=+0.128964204 container cleanup 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044)
Nov 23 04:01:07 localhost systemd[1]: libpod-conmon-4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.scope: Deactivated successfully.
Nov 23 04:01:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 04:01:07 localhost podman[107304]: error opening file `/run/crun/4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433/status`: No such file or directory
Nov 23 04:01:07 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.timer: Failed to open /run/systemd/transient/4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.timer: No such file or directory
Nov 23 04:01:07 localhost systemd[1]: 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: Failed to open /run/systemd/transient/4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433.service: No such file or directory
Nov 23 04:01:07 localhost podman[107279]: 2025-11-23 09:01:07.586778112 +0000 UTC m=+0.083975833 container cleanup 4fa7f49e4e4c5c0feebf912539376e66fd099d2153773731817e366a9376f433 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:01:07 localhost podman[107279]: logrotate_crond
Nov 23 04:01:07 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Nov 23 04:01:07 localhost systemd[1]: Stopped logrotate_crond container.
Nov 23 04:01:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60434 DF PROTO=TCP SPT=49512 DPT=9105 SEQ=2877418657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757662D90000000001030307) 
Nov 23 04:01:07 localhost podman[107280]: 2025-11-23 09:01:07.645295175 +0000 UTC m=+0.137260936 container health_status 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_compute)
Nov 23 04:01:07 localhost podman[107280]: 2025-11-23 09:01:07.667775225 +0000 UTC m=+0.159740966 container exec_died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044)
Nov 23 04:01:07 localhost podman[107280]: unhealthy
Nov 23 04:01:07 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:01:07 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 04:01:08 localhost python3.9[107407]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:08 localhost systemd[1]: Reloading.
Nov 23 04:01:08 localhost systemd-sysv-generator[107434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:01:08 localhost systemd-rc-local-generator[107430]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:01:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:01:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37734 DF PROTO=TCP SPT=50576 DPT=9102 SEQ=4003363446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757666D90000000001030307) 
Nov 23 04:01:08 localhost systemd[1]: Stopping metrics_qdr container...
Nov 23 04:01:08 localhost kernel: qdrouterd[54436]: segfault at 0 ip 00007f8e3dd987cb sp 00007fff286094c0 error 4 in libc.so.6[7f8e3dd35000+175000]
Nov 23 04:01:08 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9
Nov 23 04:01:08 localhost systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 23 04:01:08 localhost systemd[1]: Started Process Core Dump (PID 107461/UID 0).
Nov 23 04:01:08 localhost systemd-coredump[107462]: Resource limits disable core dumping for process 54436 (qdrouterd).
Nov 23 04:01:08 localhost systemd-coredump[107462]: Process 54436 (qdrouterd) of user 42465 dumped core.
Nov 23 04:01:08 localhost systemd[1]: systemd-coredump@0-107461-0.service: Deactivated successfully.
Nov 23 04:01:08 localhost podman[107448]: 2025-11-23 09:01:08.904491278 +0000 UTC m=+0.237069081 container stop c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, architecture=x86_64)
Nov 23 04:01:08 localhost systemd[1]: libpod-c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.scope: Deactivated successfully.
Nov 23 04:01:08 localhost systemd[1]: libpod-c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.scope: Consumed 29.047s CPU time.
Nov 23 04:01:08 localhost podman[107448]: 2025-11-23 09:01:08.940741767 +0000 UTC m=+0.273319590 container died c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step1, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, release=1761123044)
Nov 23 04:01:08 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.timer: Deactivated successfully.
Nov 23 04:01:08 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.
Nov 23 04:01:08 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Failed to open /run/systemd/transient/c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: No such file or directory
Nov 23 04:01:08 localhost systemd[1]: tmp-crun.bOZLpX.mount: Deactivated successfully.
Nov 23 04:01:09 localhost podman[107448]: 2025-11-23 09:01:09.045323859 +0000 UTC m=+0.377901622 container cleanup c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 04:01:09 localhost podman[107448]: metrics_qdr
Nov 23 04:01:09 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.timer: Failed to open /run/systemd/transient/c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.timer: No such file or directory
Nov 23 04:01:09 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Failed to open /run/systemd/transient/c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: No such file or directory
Nov 23 04:01:09 localhost podman[107466]: 2025-11-23 09:01:09.061264275 +0000 UTC m=+0.140007510 container cleanup c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, config_id=tripleo_step1, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd)
Nov 23 04:01:09 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a
Nov 23 04:01:09 localhost systemd[1]: libpod-conmon-c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.scope: Deactivated successfully.
Nov 23 04:01:09 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.timer: Failed to open /run/systemd/transient/c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.timer: No such file or directory
Nov 23 04:01:09 localhost systemd[1]: c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: Failed to open /run/systemd/transient/c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d.service: No such file or directory
Nov 23 04:01:09 localhost podman[107478]: 2025-11-23 09:01:09.17228095 +0000 UTC m=+0.077409849 container cleanup c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '3ca07ad6f1308e0f483a5c84bda3f5ec'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red 
Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 23 04:01:09 localhost podman[107478]: metrics_qdr
Nov 23 04:01:09 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'.
Nov 23 04:01:09 localhost systemd[1]: Stopped metrics_qdr container.
Nov 23 04:01:09 localhost systemd[1]: var-lib-containers-storage-overlay-0e4a8a7d9871b00def826b947fe67563fec7276b8de017c820e96afd9bc15049-merged.mount: Deactivated successfully.
Nov 23 04:01:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c929af23558c40ad735ab021370b18b2bc579abe66e4f7cb4c25ae0916205b6d-userdata-shm.mount: Deactivated successfully.
Nov 23 04:01:09 localhost python3.9[107581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:10 localhost python3.9[107674]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2282 DF PROTO=TCP SPT=48284 DPT=9882 SEQ=2672972346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576711D0000000001030307) 
Nov 23 04:01:12 localhost python3.9[107767]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:13 localhost python3.9[107860]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:13 localhost systemd[1]: Reloading.
Nov 23 04:01:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1267 DF PROTO=TCP SPT=47988 DPT=9101 SEQ=3690758825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757679590000000001030307) 
Nov 23 04:01:13 localhost systemd-rc-local-generator[107890]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:01:13 localhost systemd-sysv-generator[107893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:01:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:01:13 localhost systemd[1]: Stopping nova_compute container...
Nov 23 04:01:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 04:01:15 localhost systemd[1]: tmp-crun.8ef9t5.mount: Deactivated successfully.
Nov 23 04:01:15 localhost podman[107915]: 2025-11-23 09:01:15.183301301 +0000 UTC m=+0.088025601 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 04:01:15 localhost podman[107915]: 2025-11-23 09:01:15.592853128 +0000 UTC m=+0.497577428 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=)
Nov 23 04:01:15 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 04:01:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 04:01:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 04:01:17 localhost podman[107938]: 2025-11-23 09:01:17.182086725 +0000 UTC m=+0.086022448 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Nov 23 04:01:17 localhost podman[107938]: 2025-11-23 09:01:17.221106087 +0000 UTC m=+0.125041800 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:14:25Z)
Nov 23 04:01:17 localhost podman[107938]: unhealthy
Nov 23 04:01:17 localhost systemd[1]: tmp-crun.HA31Ty.mount: Deactivated successfully.
Nov 23 04:01:17 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:01:17 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 04:01:17 localhost podman[107939]: 2025-11-23 09:01:17.237951117 +0000 UTC m=+0.137636776 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, url=https://www.redhat.com, container_name=ovn_controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Nov 23 04:01:17 localhost podman[107939]: 2025-11-23 09:01:17.252568197 +0000 UTC m=+0.152253826 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container)
Nov 23 04:01:17 localhost podman[107939]: unhealthy
Nov 23 04:01:17 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:01:17 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 04:01:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1268 DF PROTO=TCP SPT=47988 DPT=9101 SEQ=3690758825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757689190000000001030307) 
Nov 23 04:01:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53054 DF PROTO=TCP SPT=49578 DPT=9100 SEQ=4147824193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757693B60000000001030307) 
Nov 23 04:01:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53056 DF PROTO=TCP SPT=49578 DPT=9100 SEQ=4147824193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75769FD90000000001030307) 
Nov 23 04:01:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11607 DF PROTO=TCP SPT=59694 DPT=9105 SEQ=1506439937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576A8190000000001030307) 
Nov 23 04:01:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11608 DF PROTO=TCP SPT=59694 DPT=9105 SEQ=1506439937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576B7D90000000001030307) 
Nov 23 04:01:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53058 DF PROTO=TCP SPT=49578 DPT=9100 SEQ=4147824193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576D0DA0000000001030307) 
Nov 23 04:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 04:01:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11609 DF PROTO=TCP SPT=59694 DPT=9105 SEQ=1506439937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576D8DA0000000001030307) 
Nov 23 04:01:37 localhost podman[107976]: Error: container 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 is not running
Nov 23 04:01:37 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Main process exited, code=exited, status=125/n/a
Nov 23 04:01:37 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed with result 'exit-code'.
Nov 23 04:01:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41148 DF PROTO=TCP SPT=55484 DPT=9102 SEQ=81533088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576DCD90000000001030307) 
Nov 23 04:01:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16627 DF PROTO=TCP SPT=40050 DPT=9882 SEQ=1170667418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576E64D0000000001030307) 
Nov 23 04:01:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41069 DF PROTO=TCP SPT=48564 DPT=9101 SEQ=1289206479 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576EE990000000001030307) 
Nov 23 04:01:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 04:01:46 localhost podman[108067]: 2025-11-23 09:01:46.174515857 +0000 UTC m=+0.082747850 container health_status 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 04:01:46 localhost podman[108067]: 2025-11-23 09:01:46.537229524 +0000 UTC m=+0.445461507 container exec_died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:01:46 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Deactivated successfully.
Nov 23 04:01:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41070 DF PROTO=TCP SPT=48564 DPT=9101 SEQ=1289206479 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7576FE590000000001030307) 
Nov 23 04:01:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 04:01:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 04:01:48 localhost podman[108091]: 2025-11-23 09:01:48.175316594 +0000 UTC m=+0.079728949 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:01:48 localhost podman[108091]: 2025-11-23 09:01:48.191475566 +0000 UTC m=+0.095887911 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 04:01:48 localhost podman[108092]: 2025-11-23 09:01:48.227464667 +0000 UTC m=+0.129355205 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, tcib_managed=true, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 04:01:48 localhost podman[108091]: unhealthy
Nov 23 04:01:48 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:01:48 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 04:01:48 localhost podman[108092]: 2025-11-23 09:01:48.271918084 +0000 UTC m=+0.173808602 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 04:01:48 localhost podman[108092]: unhealthy
Nov 23 04:01:48 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:01:48 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 04:01:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12759 DF PROTO=TCP SPT=53736 DPT=9100 SEQ=225663297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757708E80000000001030307) 
Nov 23 04:01:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12761 DF PROTO=TCP SPT=53736 DPT=9100 SEQ=225663297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757714DA0000000001030307) 
Nov 23 04:01:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54339 DF PROTO=TCP SPT=35128 DPT=9105 SEQ=3456702627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75771D590000000001030307) 
Nov 23 04:01:55 localhost podman[107901]: time="2025-11-23T09:01:55Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL"
Nov 23 04:01:55 localhost systemd[1]: tmp-crun.8VCVO7.mount: Deactivated successfully.
Nov 23 04:01:55 localhost systemd[1]: libpod-6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.scope: Deactivated successfully.
Nov 23 04:01:55 localhost systemd[1]: libpod-6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.scope: Consumed 29.152s CPU time.
Nov 23 04:01:55 localhost podman[107901]: 2025-11-23 09:01:55.829148406 +0000 UTC m=+42.111708279 container died 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 04:01:55 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.timer: Deactivated successfully.
Nov 23 04:01:55 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.
Nov 23 04:01:55 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed to open /run/systemd/transient/6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: No such file or directory
Nov 23 04:01:55 localhost systemd[1]: var-lib-containers-storage-overlay-d11023c817bbe88eb9f0fc958f07cd932ced2301bc519b0609951aaa199e4f64-merged.mount: Deactivated successfully.
Nov 23 04:01:55 localhost podman[107901]: 2025-11-23 09:01:55.892511737 +0000 UTC m=+42.175071590 container cleanup 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 23 04:01:55 localhost podman[107901]: nova_compute
Nov 23 04:01:55 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.timer: Failed to open /run/systemd/transient/6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.timer: No such file or directory
Nov 23 04:01:55 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed to open /run/systemd/transient/6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: No such file or directory
Nov 23 04:01:55 localhost podman[108132]: 2025-11-23 09:01:55.962759982 +0000 UTC m=+0.122202423 container cleanup 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 23 04:01:55 localhost systemd[1]: libpod-conmon-6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.scope: Deactivated successfully.
Nov 23 04:01:56 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.timer: Failed to open /run/systemd/transient/6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.timer: No such file or directory
Nov 23 04:01:56 localhost systemd[1]: 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: Failed to open /run/systemd/transient/6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199.service: No such file or directory
Nov 23 04:01:56 localhost podman[108146]: 2025-11-23 09:01:56.077749643 +0000 UTC m=+0.085090203 container cleanup 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-type=git, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Nov 23 04:01:56 localhost podman[108146]: nova_compute
Nov 23 04:01:56 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully.
Nov 23 04:01:56 localhost systemd[1]: Stopped nova_compute container.
Nov 23 04:01:56 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.246s CPU time, no IO.
Nov 23 04:01:56 localhost python3.9[108249]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:56 localhost systemd[1]: Reloading.
Nov 23 04:01:57 localhost systemd-rc-local-generator[108280]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:01:57 localhost systemd-sysv-generator[108284]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:01:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:01:57 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 04:01:57 localhost systemd[1]: Stopping nova_migration_target container...
Nov 23 04:01:57 localhost recover_tripleo_nova_virtqemud[108293]: 61733
Nov 23 04:01:57 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 04:01:57 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 04:01:57 localhost systemd[1]: libpod-7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.scope: Deactivated successfully.
Nov 23 04:01:57 localhost systemd[1]: libpod-7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.scope: Consumed 35.451s CPU time.
Nov 23 04:01:57 localhost podman[108292]: 2025-11-23 09:01:57.461580305 +0000 UTC m=+0.080350066 container died 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public)
Nov 23 04:01:57 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.timer: Deactivated successfully.
Nov 23 04:01:57 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.
Nov 23 04:01:57 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Failed to open /run/systemd/transient/7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: No such file or directory
Nov 23 04:01:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49-userdata-shm.mount: Deactivated successfully.
Nov 23 04:01:57 localhost systemd[1]: var-lib-containers-storage-overlay-f89b45405a25ad5b4e2d46e88df5b29e5f9747842d208330f6b1b95a66e4c65e-merged.mount: Deactivated successfully.
Nov 23 04:01:57 localhost podman[108292]: 2025-11-23 09:01:57.515788984 +0000 UTC m=+0.134558715 container cleanup 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, 
container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Nov 23 04:01:57 localhost podman[108292]: nova_migration_target
Nov 23 04:01:57 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.timer: Failed to open /run/systemd/transient/7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.timer: No such file or directory
Nov 23 04:01:57 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Failed to open /run/systemd/transient/7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: No such file or directory
Nov 23 04:01:57 localhost podman[108305]: 2025-11-23 09:01:57.561317009 +0000 UTC m=+0.083327646 container cleanup 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team)
Nov 23 04:01:57 localhost systemd[1]: libpod-conmon-7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.scope: Deactivated successfully.
Nov 23 04:01:57 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.timer: Failed to open /run/systemd/transient/7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.timer: No such file or directory
Nov 23 04:01:57 localhost systemd[1]: 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: Failed to open /run/systemd/transient/7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49.service: No such file or directory
Nov 23 04:01:57 localhost podman[108322]: 2025-11-23 09:01:57.676946546 +0000 UTC m=+0.081191149 container cleanup 7810ea86f74afd2ad7d2aeebf59bb98d380b15e1c15ac78476e2e5c3bd8bbd49 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container)
Nov 23 04:01:57 localhost podman[108322]: nova_migration_target
Nov 23 04:01:57 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully.
Nov 23 04:01:57 localhost systemd[1]: Stopped nova_migration_target container.
Nov 23 04:01:58 localhost python3.9[108426]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:01:58 localhost systemd[1]: Reloading.
Nov 23 04:01:58 localhost systemd-sysv-generator[108454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:01:58 localhost systemd-rc-local-generator[108450]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:01:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:01:58 localhost systemd[1]: Stopping nova_virtlogd_wrapper container...
Nov 23 04:01:58 localhost systemd[1]: libpod-53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6.scope: Deactivated successfully.
Nov 23 04:01:58 localhost podman[108467]: 2025-11-23 09:01:58.93724884 +0000 UTC m=+0.071572231 container stop 53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Nov 23 04:01:58 localhost podman[108467]: 2025-11-23 09:01:58.966720358 +0000 UTC m=+0.101043749 container died 53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 04:01:59 localhost podman[108467]: 2025-11-23 09:01:59.056221688 +0000 UTC m=+0.190545069 container cleanup 53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, build-date=2025-11-19T00:35:22Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, container_name=nova_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 04:01:59 localhost podman[108467]: nova_virtlogd_wrapper
Nov 23 04:01:59 localhost podman[108481]: 2025-11-23 09:01:59.070331244 +0000 UTC m=+0.111353114 container cleanup 53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, container_name=nova_virtlogd_wrapper, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64)
Nov 23 04:01:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54340 DF PROTO=TCP SPT=35128 DPT=9105 SEQ=3456702627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75772D190000000001030307) 
Nov 23 04:01:59 localhost systemd[1]: var-lib-containers-storage-overlay-36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2-merged.mount: Deactivated successfully.
Nov 23 04:01:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6-userdata-shm.mount: Deactivated successfully.
Nov 23 04:02:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:02:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5640 writes, 24K keys, 5640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5640 writes, 724 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:02:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12763 DF PROTO=TCP SPT=53736 DPT=9100 SEQ=225663297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757744D90000000001030307) 
Nov 23 04:02:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:02:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 4929 writes, 22K keys, 4929 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4929 writes, 684 syncs, 7.21 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:02:06 localhost sshd[108498]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:02:06 localhost sshd[108500]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:02:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54341 DF PROTO=TCP SPT=35128 DPT=9105 SEQ=3456702627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75774CD90000000001030307) 
Nov 23 04:02:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36391 DF PROTO=TCP SPT=39132 DPT=9102 SEQ=2359550290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757750D90000000001030307) 
Nov 23 04:02:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12292 DF PROTO=TCP SPT=50986 DPT=9882 SEQ=1539855050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75775B7D0000000001030307) 
Nov 23 04:02:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25441 DF PROTO=TCP SPT=59604 DPT=9101 SEQ=2770832216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757763D90000000001030307) 
Nov 23 04:02:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25442 DF PROTO=TCP SPT=59604 DPT=9101 SEQ=2770832216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577739A0000000001030307) 
Nov 23 04:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 04:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 04:02:18 localhost podman[108502]: 2025-11-23 09:02:18.413489913 +0000 UTC m=+0.073241817 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com)
Nov 23 04:02:18 localhost podman[108503]: 2025-11-23 09:02:18.428634168 +0000 UTC m=+0.080725647 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller)
Nov 23 04:02:18 localhost podman[108502]: 2025-11-23 09:02:18.434958656 +0000 UTC m=+0.094710590 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, container_name=ovn_metadata_agent, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Nov 23 04:02:18 localhost podman[108502]: unhealthy
Nov 23 04:02:18 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:02:18 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 04:02:18 localhost podman[108503]: 2025-11-23 09:02:18.477992165 +0000 UTC m=+0.130083564 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 04:02:18 localhost podman[108503]: unhealthy
Nov 23 04:02:18 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:02:18 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 04:02:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12660 DF PROTO=TCP SPT=58740 DPT=9100 SEQ=3012887667 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75777E160000000001030307) 
Nov 23 04:02:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12662 DF PROTO=TCP SPT=58740 DPT=9100 SEQ=3012887667 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75778A190000000001030307) 
Nov 23 04:02:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49588 DF PROTO=TCP SPT=44380 DPT=9105 SEQ=1018959846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757792990000000001030307) 
Nov 23 04:02:28 localhost sshd[108543]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:02:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49589 DF PROTO=TCP SPT=44380 DPT=9105 SEQ=1018959846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577A2590000000001030307) 
Nov 23 04:02:33 localhost sshd[108545]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:02:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12664 DF PROTO=TCP SPT=58740 DPT=9100 SEQ=3012887667 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577BAD90000000001030307) 
Nov 23 04:02:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49590 DF PROTO=TCP SPT=44380 DPT=9105 SEQ=1018959846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577C2D90000000001030307) 
Nov 23 04:02:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61472 DF PROTO=TCP SPT=41484 DPT=9102 SEQ=3721229031 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577C6DA0000000001030307) 
Nov 23 04:02:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52705 DF PROTO=TCP SPT=34762 DPT=9882 SEQ=2887639648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577D0AD0000000001030307) 
Nov 23 04:02:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11903 DF PROTO=TCP SPT=60648 DPT=9101 SEQ=3408436320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577D9190000000001030307) 
Nov 23 04:02:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11904 DF PROTO=TCP SPT=60648 DPT=9101 SEQ=3408436320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577E8DA0000000001030307) 
Nov 23 04:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 04:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 04:02:48 localhost podman[108624]: 2025-11-23 09:02:48.68527867 +0000 UTC m=+0.085438782 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044)
Nov 23 04:02:48 localhost podman[108624]: 2025-11-23 09:02:48.703879087 +0000 UTC m=+0.104039239 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64)
Nov 23 04:02:48 localhost systemd[1]: tmp-crun.iH3q4S.mount: Deactivated successfully.
Nov 23 04:02:48 localhost podman[108625]: 2025-11-23 09:02:48.737453603 +0000 UTC m=+0.133808704 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, version=17.1.12, container_name=ovn_controller, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4)
Nov 23 04:02:48 localhost podman[108625]: 2025-11-23 09:02:48.751717764 +0000 UTC m=+0.148072915 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 23 04:02:48 localhost podman[108625]: unhealthy
Nov 23 04:02:48 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:02:48 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 04:02:48 localhost podman[108624]: unhealthy
Nov 23 04:02:48 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:02:48 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 04:02:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10508 DF PROTO=TCP SPT=33540 DPT=9100 SEQ=3879980728 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577F3470000000001030307) 
Nov 23 04:02:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10510 DF PROTO=TCP SPT=33540 DPT=9100 SEQ=3879980728 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7577FF590000000001030307) 
Nov 23 04:02:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=83 DF PROTO=TCP SPT=42640 DPT=9105 SEQ=1046434936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757807990000000001030307) 
Nov 23 04:02:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=84 DF PROTO=TCP SPT=42640 DPT=9105 SEQ=1046434936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578175A0000000001030307) 
Nov 23 04:03:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10512 DF PROTO=TCP SPT=33540 DPT=9100 SEQ=3879980728 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75782EDA0000000001030307) 
Nov 23 04:03:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=85 DF PROTO=TCP SPT=42640 DPT=9105 SEQ=1046434936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757836DA0000000001030307) 
Nov 23 04:03:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17290 DF PROTO=TCP SPT=56640 DPT=9102 SEQ=214449554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75783ADA0000000001030307) 
Nov 23 04:03:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23265 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=735475754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757845DC0000000001030307) 
Nov 23 04:03:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42162 DF PROTO=TCP SPT=56314 DPT=9101 SEQ=2564268430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75784E190000000001030307) 
Nov 23 04:03:16 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Nov 23 04:03:16 localhost recover_tripleo_nova_virtqemud[108663]: 61733
Nov 23 04:03:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Nov 23 04:03:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Nov 23 04:03:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42163 DF PROTO=TCP SPT=56314 DPT=9101 SEQ=2564268430 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75785DDA0000000001030307) 
Nov 23 04:03:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 04:03:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 04:03:18 localhost podman[108664]: 2025-11-23 09:03:18.913803949 +0000 UTC m=+0.073563525 container health_status 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, 
config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Nov 23 04:03:18 localhost podman[108665]: 2025-11-23 09:03:18.957204799 +0000 UTC m=+0.111734495 container health_status 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Nov 23 04:03:18 localhost podman[108665]: 2025-11-23 09:03:18.976903405 +0000 UTC m=+0.131433091 container exec_died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible)
Nov 23 04:03:18 localhost podman[108665]: unhealthy
Nov 23 04:03:18 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:03:18 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed with result 'exit-code'.
Nov 23 04:03:19 localhost podman[108664]: 2025-11-23 09:03:19.010142971 +0000 UTC m=+0.169902567 container exec_died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Nov 23 04:03:19 localhost podman[108664]: unhealthy
Nov 23 04:03:19 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:03:19 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed with result 'exit-code'.
Nov 23 04:03:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41271 DF PROTO=TCP SPT=39956 DPT=9100 SEQ=1842815768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757868760000000001030307) 
Nov 23 04:03:23 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing.
Nov 23 04:03:23 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 60950 (conmon) with signal SIGKILL.
Nov 23 04:03:23 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL
Nov 23 04:03:23 localhost systemd[1]: libpod-conmon-53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6.scope: Deactivated successfully.
Nov 23 04:03:23 localhost podman[108715]: error opening file `/run/crun/53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6/status`: No such file or directory
Nov 23 04:03:23 localhost podman[108702]: 2025-11-23 09:03:23.147180763 +0000 UTC m=+0.056393377 container cleanup 53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, release=1761123044, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, container_name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 23 04:03:23 localhost podman[108702]: nova_virtlogd_wrapper
Nov 23 04:03:23 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'.
Nov 23 04:03:23 localhost systemd[1]: Stopped nova_virtlogd_wrapper container.
Nov 23 04:03:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41273 DF PROTO=TCP SPT=39956 DPT=9100 SEQ=1842815768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757874990000000001030307) 
Nov 23 04:03:23 localhost python3.9[108808]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:23 localhost systemd[1]: Reloading.
Nov 23 04:03:24 localhost systemd-rc-local-generator[108836]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:24 localhost systemd-sysv-generator[108840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:24 localhost systemd[1]: Stopping nova_virtnodedevd container...
Nov 23 04:03:24 localhost systemd[1]: libpod-6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b.scope: Deactivated successfully.
Nov 23 04:03:24 localhost systemd[1]: libpod-6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b.scope: Consumed 1.565s CPU time.
Nov 23 04:03:24 localhost podman[108849]: 2025-11-23 09:03:24.323967616 +0000 UTC m=+0.060482875 container died 6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_virtnodedevd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 23 04:03:24 localhost podman[108849]: 2025-11-23 09:03:24.362967888 +0000 UTC m=+0.099483147 container cleanup 6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtnodedevd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 23 04:03:24 localhost podman[108849]: nova_virtnodedevd
Nov 23 04:03:24 localhost podman[108864]: 2025-11-23 09:03:24.402972655 +0000 UTC m=+0.070077662 container cleanup 6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, url=https://www.redhat.com, config_id=tripleo_step3, container_name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container)
Nov 23 04:03:24 localhost systemd[1]: libpod-conmon-6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b.scope: Deactivated successfully.
Nov 23 04:03:24 localhost podman[108891]: error opening file `/run/crun/6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b/status`: No such file or directory
Nov 23 04:03:24 localhost podman[108880]: 2025-11-23 09:03:24.501900058 +0000 UTC m=+0.068787078 container cleanup 6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtnodedevd, managed_by=tripleo_ansible)
Nov 23 04:03:24 localhost podman[108880]: nova_virtnodedevd
Nov 23 04:03:24 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Nov 23 04:03:24 localhost systemd[1]: Stopped nova_virtnodedevd container.
Nov 23 04:03:25 localhost python3.9[108984]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:25 localhost systemd[1]: var-lib-containers-storage-overlay-3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519-merged.mount: Deactivated successfully.
Nov 23 04:03:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6fc8d41046970eb28f19fe4f3e2c7d5bfd8ebf4c589996a4953bdaeec91e398b-userdata-shm.mount: Deactivated successfully.
Nov 23 04:03:25 localhost systemd[1]: Reloading.
Nov 23 04:03:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44430 DF PROTO=TCP SPT=36480 DPT=9105 SEQ=2160180403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75787CDA0000000001030307) 
Nov 23 04:03:25 localhost systemd-sysv-generator[109015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:25 localhost systemd-rc-local-generator[109012]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:25 localhost systemd[1]: Stopping nova_virtproxyd container...
Nov 23 04:03:25 localhost systemd[1]: libpod-ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824.scope: Deactivated successfully.
Nov 23 04:03:25 localhost podman[109026]: 2025-11-23 09:03:25.708095646 +0000 UTC m=+0.059248823 container died ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:03:25 localhost systemd[1]: tmp-crun.bDuUZ9.mount: Deactivated successfully.
Nov 23 04:03:25 localhost podman[109026]: 2025-11-23 09:03:25.812523325 +0000 UTC m=+0.163676502 container cleanup ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, release=1761123044, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-type=git, container_name=nova_virtproxyd)
Nov 23 04:03:25 localhost podman[109026]: nova_virtproxyd
Nov 23 04:03:25 localhost podman[109039]: 2025-11-23 09:03:25.827970317 +0000 UTC m=+0.109896875 container cleanup ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, container_name=nova_virtproxyd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, io.openshift.expose-services=)
Nov 23 04:03:25 localhost systemd[1]: libpod-conmon-ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824.scope: Deactivated successfully.
Nov 23 04:03:25 localhost podman[109072]: error opening file `/run/crun/ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824/status`: No such file or directory
Nov 23 04:03:25 localhost podman[109059]: 2025-11-23 09:03:25.931837761 +0000 UTC m=+0.065887280 container cleanup ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtproxyd, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, version=17.1.12, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container)
Nov 23 04:03:25 localhost podman[109059]: nova_virtproxyd
Nov 23 04:03:25 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully.
Nov 23 04:03:25 localhost systemd[1]: Stopped nova_virtproxyd container.
Nov 23 04:03:26 localhost systemd[1]: tmp-crun.gVgc40.mount: Deactivated successfully.
Nov 23 04:03:26 localhost systemd[1]: var-lib-containers-storage-overlay-ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b-merged.mount: Deactivated successfully.
Nov 23 04:03:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea769dd192dead778404a1ca1445690dabf4a79e9ef3c9797525e826ee357824-userdata-shm.mount: Deactivated successfully.
Nov 23 04:03:26 localhost python3.9[109165]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:27 localhost systemd[1]: Reloading.
Nov 23 04:03:27 localhost systemd-sysv-generator[109192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:27 localhost systemd-rc-local-generator[109188]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully.
Nov 23 04:03:28 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m.
Nov 23 04:03:28 localhost systemd[1]: Stopping nova_virtqemud container...
Nov 23 04:03:28 localhost systemd[1]: tmp-crun.aq6avi.mount: Deactivated successfully.
Nov 23 04:03:28 localhost systemd[1]: libpod-65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690.scope: Deactivated successfully.
Nov 23 04:03:28 localhost systemd[1]: libpod-65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690.scope: Consumed 2.318s CPU time.
Nov 23 04:03:28 localhost podman[109206]: 2025-11-23 09:03:28.132067664 +0000 UTC m=+0.081105476 container died 65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', 
'/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 23 04:03:28 localhost podman[109206]: 2025-11-23 09:03:28.162109496 +0000 UTC m=+0.111147298 container cleanup 65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step3, container_name=nova_virtqemud, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true)
Nov 23 04:03:28 localhost podman[109206]: nova_virtqemud
Nov 23 04:03:28 localhost podman[109221]: 2025-11-23 09:03:28.181224396 +0000 UTC m=+0.043926733 container cleanup 65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, container_name=nova_virtqemud, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, name=rhosp17/openstack-nova-libvirt)
Nov 23 04:03:28 localhost systemd[1]: libpod-conmon-65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690.scope: Deactivated successfully.
Nov 23 04:03:28 localhost podman[109249]: error opening file `/run/crun/65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690/status`: No such file or directory
Nov 23 04:03:28 localhost podman[109237]: 2025-11-23 09:03:28.253658491 +0000 UTC m=+0.046418831 container cleanup 65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, container_name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:03:28 localhost podman[109237]: nova_virtqemud
Nov 23 04:03:28 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully.
Nov 23 04:03:28 localhost systemd[1]: Stopped nova_virtqemud container.
Nov 23 04:03:28 localhost sshd[109321]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:03:29 localhost python3.9[109344]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:29 localhost systemd[1]: var-lib-containers-storage-overlay-61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac-merged.mount: Deactivated successfully.
Nov 23 04:03:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690-userdata-shm.mount: Deactivated successfully.
Nov 23 04:03:29 localhost systemd[1]: Reloading.
Nov 23 04:03:29 localhost systemd-rc-local-generator[109373]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:29 localhost systemd-sysv-generator[109377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44431 DF PROTO=TCP SPT=36480 DPT=9105 SEQ=2160180403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75788C990000000001030307) 
Nov 23 04:03:30 localhost python3.9[109474]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:30 localhost systemd[1]: Reloading.
Nov 23 04:03:30 localhost systemd-rc-local-generator[109501]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:30 localhost systemd-sysv-generator[109506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:30 localhost systemd[1]: Stopping nova_virtsecretd container...
Nov 23 04:03:30 localhost systemd[1]: libpod-71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5.scope: Deactivated successfully.
Nov 23 04:03:30 localhost podman[109515]: 2025-11-23 09:03:30.681259705 +0000 UTC m=+0.078405004 container died 71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_virtsecretd)
Nov 23 04:03:30 localhost systemd[1]: tmp-crun.CPlXKS.mount: Deactivated successfully.
Nov 23 04:03:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5-userdata-shm.mount: Deactivated successfully.
Nov 23 04:03:30 localhost podman[109515]: 2025-11-23 09:03:30.717278326 +0000 UTC m=+0.114423625 container cleanup 71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, tcib_managed=true, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=nova_virtsecretd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 04:03:30 localhost podman[109515]: nova_virtsecretd
Nov 23 04:03:30 localhost podman[109529]: 2025-11-23 09:03:30.764108947 +0000 UTC m=+0.073395481 container cleanup 71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, container_name=nova_virtsecretd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:03:30 localhost systemd[1]: libpod-conmon-71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5.scope: Deactivated successfully.
Nov 23 04:03:30 localhost podman[109557]: error opening file `/run/crun/71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5/status`: No such file or directory
Nov 23 04:03:30 localhost podman[109544]: 2025-11-23 09:03:30.875133202 +0000 UTC m=+0.072148608 container cleanup 71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 23 04:03:30 localhost podman[109544]: nova_virtsecretd
Nov 23 04:03:30 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully.
Nov 23 04:03:30 localhost systemd[1]: Stopped nova_virtsecretd container.
Nov 23 04:03:31 localhost systemd[1]: var-lib-containers-storage-overlay-f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae-merged.mount: Deactivated successfully.
Nov 23 04:03:31 localhost python3.9[109650]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:31 localhost systemd[1]: Reloading.
Nov 23 04:03:31 localhost systemd-rc-local-generator[109674]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:31 localhost systemd-sysv-generator[109681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:32 localhost systemd[1]: Stopping nova_virtstoraged container...
Nov 23 04:03:32 localhost systemd[1]: libpod-bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071.scope: Deactivated successfully.
Nov 23 04:03:32 localhost podman[109691]: 2025-11-23 09:03:32.191688098 +0000 UTC m=+0.067936515 container died bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtstoraged, architecture=x86_64)
Nov 23 04:03:32 localhost podman[109691]: 2025-11-23 09:03:32.23034968 +0000 UTC m=+0.106598067 container cleanup bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, container_name=nova_virtstoraged, io.openshift.expose-services=, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, build-date=2025-11-19T00:35:22Z)
Nov 23 04:03:32 localhost podman[109691]: nova_virtstoraged
Nov 23 04:03:32 localhost podman[109706]: 2025-11-23 09:03:32.280372655 +0000 UTC m=+0.072584848 container cleanup bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtstoraged, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Nov 23 04:03:32 localhost systemd[1]: libpod-conmon-bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071.scope: Deactivated successfully.
Nov 23 04:03:32 localhost podman[109735]: error opening file `/run/crun/bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071/status`: No such file or directory
Nov 23 04:03:32 localhost podman[109723]: 2025-11-23 09:03:32.38275176 +0000 UTC m=+0.071153911 container cleanup bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '54a97af4633bfad00758ecf55e783ce2'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtstoraged)
Nov 23 04:03:32 localhost podman[109723]: nova_virtstoraged
Nov 23 04:03:32 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully.
Nov 23 04:03:32 localhost systemd[1]: Stopped nova_virtstoraged container.
Nov 23 04:03:32 localhost systemd[1]: var-lib-containers-storage-overlay-93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe-merged.mount: Deactivated successfully.
Nov 23 04:03:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071-userdata-shm.mount: Deactivated successfully.
Nov 23 04:03:33 localhost sshd[109829]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:03:33 localhost python3.9[109828]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:33 localhost systemd[1]: Reloading.
Nov 23 04:03:33 localhost systemd-rc-local-generator[109853]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:33 localhost systemd-sysv-generator[109856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:33 localhost systemd[1]: Stopping ovn_controller container...
Nov 23 04:03:33 localhost systemd[1]: libpod-838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.scope: Deactivated successfully.
Nov 23 04:03:33 localhost systemd[1]: libpod-838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.scope: Consumed 2.710s CPU time.
Nov 23 04:03:33 localhost podman[109870]: 2025-11-23 09:03:33.647129052 +0000 UTC m=+0.082998457 container died 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4)
Nov 23 04:03:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.timer: Deactivated successfully.
Nov 23 04:03:33 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.
Nov 23 04:03:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed to open /run/systemd/transient/838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: No such file or directory
Nov 23 04:03:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd-userdata-shm.mount: Deactivated successfully.
Nov 23 04:03:33 localhost systemd[1]: var-lib-containers-storage-overlay-9d6c9a2cd921c771320ccd8c605179d677283c241110a1e27dd4733dcdcf4da2-merged.mount: Deactivated successfully.
Nov 23 04:03:33 localhost podman[109870]: 2025-11-23 09:03:33.694816675 +0000 UTC m=+0.130686040 container cleanup 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, architecture=x86_64, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Nov 23 04:03:33 localhost podman[109870]: ovn_controller
Nov 23 04:03:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.timer: Failed to open /run/systemd/transient/838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.timer: No such file or directory
Nov 23 04:03:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed to open /run/systemd/transient/838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: No such file or directory
Nov 23 04:03:33 localhost podman[109884]: 2025-11-23 09:03:33.734244758 +0000 UTC m=+0.073427682 container cleanup 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, config_id=tripleo_step4, container_name=ovn_controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Nov 23 04:03:33 localhost systemd[1]: libpod-conmon-838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.scope: Deactivated successfully.
Nov 23 04:03:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.timer: Failed to open /run/systemd/transient/838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.timer: No such file or directory
Nov 23 04:03:33 localhost systemd[1]: 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: Failed to open /run/systemd/transient/838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd.service: No such file or directory
Nov 23 04:03:33 localhost podman[109897]: 2025-11-23 09:03:33.840082804 +0000 UTC m=+0.071247663 container cleanup 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, batch=17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Nov 23 04:03:33 localhost podman[109897]: ovn_controller
Nov 23 04:03:33 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully.
Nov 23 04:03:33 localhost systemd[1]: Stopped ovn_controller container.
Nov 23 04:03:34 localhost python3.9[110001]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:34 localhost systemd[1]: Reloading.
Nov 23 04:03:34 localhost systemd-rc-local-generator[110028]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:34 localhost systemd-sysv-generator[110032]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:35 localhost systemd[1]: Stopping ovn_metadata_agent container...
Nov 23 04:03:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41275 DF PROTO=TCP SPT=39956 DPT=9100 SEQ=1842815768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578A4D90000000001030307) 
Nov 23 04:03:35 localhost systemd[1]: libpod-21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.scope: Deactivated successfully.
Nov 23 04:03:35 localhost systemd[1]: libpod-21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.scope: Consumed 9.688s CPU time.
Nov 23 04:03:35 localhost podman[110043]: 2025-11-23 09:03:35.774692944 +0000 UTC m=+0.672358554 container died 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 04:03:35 localhost systemd[1]: tmp-crun.tqoZou.mount: Deactivated successfully.
Nov 23 04:03:35 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.timer: Deactivated successfully.
Nov 23 04:03:35 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.
Nov 23 04:03:35 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed to open /run/systemd/transient/21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: No such file or directory
Nov 23 04:03:35 localhost systemd[1]: tmp-crun.5byL6M.mount: Deactivated successfully.
Nov 23 04:03:35 localhost podman[110043]: 2025-11-23 09:03:35.852899733 +0000 UTC m=+0.750565343 container cleanup 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Nov 23 04:03:35 localhost podman[110043]: ovn_metadata_agent
Nov 23 04:03:35 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.timer: Failed to open /run/systemd/transient/21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.timer: No such file or directory
Nov 23 04:03:35 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed to open /run/systemd/transient/21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: No such file or directory
Nov 23 04:03:35 localhost podman[110056]: 2025-11-23 09:03:35.911001064 +0000 UTC m=+0.127631689 container cleanup 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible)
Nov 23 04:03:35 localhost systemd[1]: libpod-conmon-21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.scope: Deactivated successfully.
Nov 23 04:03:36 localhost podman[110087]: error opening file `/run/crun/21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168/status`: No such file or directory
Nov 23 04:03:36 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.timer: Failed to open /run/systemd/transient/21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.timer: No such file or directory
Nov 23 04:03:36 localhost systemd[1]: 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: Failed to open /run/systemd/transient/21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168.service: No such file or directory
Nov 23 04:03:36 localhost podman[110074]: 2025-11-23 09:03:36.02955765 +0000 UTC m=+0.087125917 container cleanup 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 23 04:03:36 localhost podman[110074]: ovn_metadata_agent
Nov 23 04:03:36 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully.
Nov 23 04:03:36 localhost systemd[1]: Stopped ovn_metadata_agent container.
Nov 23 04:03:36 localhost systemd[1]: var-lib-containers-storage-overlay-b71ea05df59e3d7e6164ad14b9e6ef7192ae917ae3414df8fb50b3972aecb677-merged.mount: Deactivated successfully.
Nov 23 04:03:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168-userdata-shm.mount: Deactivated successfully.
Nov 23 04:03:36 localhost python3.9[110180]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:03:37 localhost systemd[1]: Reloading.
Nov 23 04:03:37 localhost systemd-rc-local-generator[110209]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:03:37 localhost systemd-sysv-generator[110212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:03:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:03:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44432 DF PROTO=TCP SPT=36480 DPT=9105 SEQ=2160180403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578ACD90000000001030307) 
Nov 23 04:03:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59731 DF PROTO=TCP SPT=36840 DPT=9102 SEQ=57572344 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578B0D90000000001030307) 
Nov 23 04:03:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36172 DF PROTO=TCP SPT=37146 DPT=9882 SEQ=3482017400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578BB0D0000000001030307) 
Nov 23 04:03:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51453 DF PROTO=TCP SPT=43668 DPT=9101 SEQ=1291401658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578C35A0000000001030307) 
Nov 23 04:03:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51454 DF PROTO=TCP SPT=43668 DPT=9101 SEQ=1291401658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578D3190000000001030307) 
Nov 23 04:03:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64027 DF PROTO=TCP SPT=39588 DPT=9100 SEQ=3598035678 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578DDA70000000001030307) 
Nov 23 04:03:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64029 DF PROTO=TCP SPT=39588 DPT=9100 SEQ=3598035678 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578E9990000000001030307) 
Nov 23 04:03:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59433 DF PROTO=TCP SPT=57814 DPT=9105 SEQ=446071967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7578F2190000000001030307) 
Nov 23 04:03:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59434 DF PROTO=TCP SPT=57814 DPT=9105 SEQ=446071967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757901DA0000000001030307) 
Nov 23 04:04:01 localhost sshd[110311]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:04:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64031 DF PROTO=TCP SPT=39588 DPT=9100 SEQ=3598035678 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757918D90000000001030307) 
Nov 23 04:04:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59435 DF PROTO=TCP SPT=57814 DPT=9105 SEQ=446071967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757922D90000000001030307) 
Nov 23 04:04:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5605 DF PROTO=TCP SPT=45414 DPT=9102 SEQ=2451984844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757926DA0000000001030307) 
Nov 23 04:04:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31090 DF PROTO=TCP SPT=32988 DPT=9882 SEQ=2584095046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7579303C0000000001030307) 
Nov 23 04:04:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5108 DF PROTO=TCP SPT=36552 DPT=9101 SEQ=3155385709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757938990000000001030307) 
Nov 23 04:04:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5109 DF PROTO=TCP SPT=36552 DPT=9101 SEQ=3155385709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757948590000000001030307) 
Nov 23 04:04:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25783 DF PROTO=TCP SPT=48268 DPT=9100 SEQ=4060426849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757952D70000000001030307) 
Nov 23 04:04:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25785 DF PROTO=TCP SPT=48268 DPT=9100 SEQ=4060426849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75795ED90000000001030307) 
Nov 23 04:04:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51854 DF PROTO=TCP SPT=42874 DPT=9105 SEQ=4053750169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757967590000000001030307) 
Nov 23 04:04:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51855 DF PROTO=TCP SPT=42874 DPT=9105 SEQ=4053750169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757977190000000001030307) 
Nov 23 04:04:32 localhost sshd[110313]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:04:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25787 DF PROTO=TCP SPT=48268 DPT=9100 SEQ=4060426849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75798ED90000000001030307) 
Nov 23 04:04:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51856 DF PROTO=TCP SPT=42874 DPT=9105 SEQ=4053750169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757996D90000000001030307) 
Nov 23 04:04:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11797 DF PROTO=TCP SPT=50384 DPT=9102 SEQ=2881331814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75799AD90000000001030307) 
Nov 23 04:04:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15683 DF PROTO=TCP SPT=35344 DPT=9882 SEQ=3087196774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7579A56C0000000001030307) 
Nov 23 04:04:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19370 DF PROTO=TCP SPT=43422 DPT=9101 SEQ=2988805467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7579ADD90000000001030307) 
Nov 23 04:04:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19371 DF PROTO=TCP SPT=43422 DPT=9101 SEQ=2988805467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7579BD990000000001030307) 
Nov 23 04:04:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40342 DF PROTO=TCP SPT=51982 DPT=9100 SEQ=1206437352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7579C8070000000001030307) 
Nov 23 04:04:52 localhost sshd[110441]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:04:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40344 DF PROTO=TCP SPT=51982 DPT=9100 SEQ=1206437352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7579D4190000000001030307) 
Nov 23 04:04:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16198 DF PROTO=TCP SPT=53698 DPT=9105 SEQ=872459467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7579DC590000000001030307) 
Nov 23 04:04:57 localhost systemd[1]: session-36.scope: Deactivated successfully.
Nov 23 04:04:57 localhost systemd[1]: session-36.scope: Consumed 19.202s CPU time.
Nov 23 04:04:57 localhost systemd-logind[761]: Session 36 logged out. Waiting for processes to exit.
Nov 23 04:04:57 localhost systemd-logind[761]: Removed session 36.
Nov 23 04:04:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16199 DF PROTO=TCP SPT=53698 DPT=9105 SEQ=872459467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7579EC190000000001030307) 
Nov 23 04:05:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40346 DF PROTO=TCP SPT=51982 DPT=9100 SEQ=1206437352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A04DA0000000001030307) 
Nov 23 04:05:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16200 DF PROTO=TCP SPT=53698 DPT=9105 SEQ=872459467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A0CDA0000000001030307) 
Nov 23 04:05:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62601 DF PROTO=TCP SPT=58718 DPT=9102 SEQ=2569475477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A10DA0000000001030307) 
Nov 23 04:05:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2153 DF PROTO=TCP SPT=60914 DPT=9882 SEQ=3749174108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A1A9C0000000001030307) 
Nov 23 04:05:12 localhost sshd[110443]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:05:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49528 DF PROTO=TCP SPT=57810 DPT=9101 SEQ=1191456919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A22D90000000001030307) 
Nov 23 04:05:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49529 DF PROTO=TCP SPT=57810 DPT=9101 SEQ=1191456919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A32990000000001030307) 
Nov 23 04:05:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42285 DF PROTO=TCP SPT=48202 DPT=9100 SEQ=32029711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A3D370000000001030307) 
Nov 23 04:05:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42287 DF PROTO=TCP SPT=48202 DPT=9100 SEQ=32029711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A49590000000001030307) 
Nov 23 04:05:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40644 DF PROTO=TCP SPT=37378 DPT=9105 SEQ=157125088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A519A0000000001030307) 
Nov 23 04:05:28 localhost sshd[110445]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:05:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40645 DF PROTO=TCP SPT=37378 DPT=9105 SEQ=157125088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A61590000000001030307) 
Nov 23 04:05:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42289 DF PROTO=TCP SPT=48202 DPT=9100 SEQ=32029711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A78D90000000001030307) 
Nov 23 04:05:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40646 DF PROTO=TCP SPT=37378 DPT=9105 SEQ=157125088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A80D90000000001030307) 
Nov 23 04:05:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37062 DF PROTO=TCP SPT=42390 DPT=9102 SEQ=849256589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A84DA0000000001030307) 
Nov 23 04:05:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48284 DF PROTO=TCP SPT=39080 DPT=9882 SEQ=1957975812 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A8FCC0000000001030307) 
Nov 23 04:05:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26558 DF PROTO=TCP SPT=50214 DPT=9101 SEQ=1123765009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757A98190000000001030307) 
Nov 23 04:05:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26559 DF PROTO=TCP SPT=50214 DPT=9101 SEQ=1123765009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757AA7D90000000001030307) 
Nov 23 04:05:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26271 DF PROTO=TCP SPT=35550 DPT=9100 SEQ=3967697947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757AB2670000000001030307) 
Nov 23 04:05:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26273 DF PROTO=TCP SPT=35550 DPT=9100 SEQ=3967697947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757ABE590000000001030307) 
Nov 23 04:05:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45047 DF PROTO=TCP SPT=36304 DPT=9105 SEQ=1848353342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757AC6D90000000001030307) 
Nov 23 04:05:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45048 DF PROTO=TCP SPT=36304 DPT=9105 SEQ=1848353342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757AD6990000000001030307) 
Nov 23 04:06:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26275 DF PROTO=TCP SPT=35550 DPT=9100 SEQ=3967697947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757AEED90000000001030307) 
Nov 23 04:06:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45049 DF PROTO=TCP SPT=36304 DPT=9105 SEQ=1848353342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757AF6DA0000000001030307) 
Nov 23 04:06:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3564 DF PROTO=TCP SPT=54502 DPT=9102 SEQ=611004544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757AFAD90000000001030307) 
Nov 23 04:06:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32420 DF PROTO=TCP SPT=42852 DPT=9882 SEQ=3426333742 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B04FC0000000001030307) 
Nov 23 04:06:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=162 DF PROTO=TCP SPT=55528 DPT=9101 SEQ=265876468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B0D590000000001030307) 
Nov 23 04:06:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=163 DF PROTO=TCP SPT=55528 DPT=9101 SEQ=265876468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B1D190000000001030307) 
Nov 23 04:06:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42829 DF PROTO=TCP SPT=38760 DPT=9100 SEQ=2869776106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B27960000000001030307) 
Nov 23 04:06:20 localhost sshd[110523]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:06:22 localhost sshd[110525]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:06:22 localhost systemd-logind[761]: New session 37 of user zuul.
Nov 23 04:06:22 localhost systemd[1]: Started Session 37 of User zuul.
Nov 23 04:06:22 localhost python3.9[110606]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42831 DF PROTO=TCP SPT=38760 DPT=9100 SEQ=2869776106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B339A0000000001030307) 
Nov 23 04:06:23 localhost python3.9[110698]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:24 localhost python3.9[110790]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:24 localhost python3.9[110882]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:25 localhost python3.9[110974]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27228 DF PROTO=TCP SPT=43820 DPT=9105 SEQ=3409284810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B3C1A0000000001030307) 
Nov 23 04:06:25 localhost python3.9[111066]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:26 localhost python3.9[111158]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:27 localhost python3.9[111250]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:27 localhost python3.9[111342]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:28 localhost python3.9[111434]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:28 localhost python3.9[111526]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27229 DF PROTO=TCP SPT=43820 DPT=9105 SEQ=3409284810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B4BD90000000001030307) 
Nov 23 04:06:29 localhost python3.9[111618]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:30 localhost python3.9[111710]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:30 localhost python3.9[111802]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:31 localhost python3.9[111894]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:32 localhost python3.9[111986]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:32 localhost python3.9[112078]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:33 localhost python3.9[112170]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:33 localhost python3.9[112262]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:34 localhost python3.9[112354]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:35 localhost python3.9[112446]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42833 DF PROTO=TCP SPT=38760 DPT=9100 SEQ=2869776106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B62D90000000001030307) 
Nov 23 04:06:36 localhost python3.9[112538]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:36 localhost python3.9[112630]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:37 localhost python3.9[112722]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27230 DF PROTO=TCP SPT=43820 DPT=9105 SEQ=3409284810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B6CDA0000000001030307) 
Nov 23 04:06:38 localhost python3.9[112814]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21017 DF PROTO=TCP SPT=47886 DPT=9102 SEQ=1288019308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B70D90000000001030307) 
Nov 23 04:06:38 localhost python3.9[112906]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:39 localhost python3.9[112998]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:40 localhost python3.9[113090]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:40 localhost python3.9[113182]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37050 DF PROTO=TCP SPT=37710 DPT=9882 SEQ=3093346641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B7A2E0000000001030307) 
Nov 23 04:06:41 localhost python3.9[113274]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:42 localhost python3.9[113366]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:42 localhost python3.9[113458]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29246 DF PROTO=TCP SPT=32848 DPT=9101 SEQ=3615073835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B82990000000001030307) 
Nov 23 04:06:43 localhost python3.9[113550]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:44 localhost python3.9[113642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:44 localhost python3.9[113734]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:45 localhost python3.9[113826]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:46 localhost python3.9[113918]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:46 localhost python3.9[114010]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:47 localhost python3.9[114102]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29247 DF PROTO=TCP SPT=32848 DPT=9101 SEQ=3615073835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B92590000000001030307) 
Nov 23 04:06:48 localhost python3.9[114194]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:48 localhost python3.9[114286]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:49 localhost python3.9[114378]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:06:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54233 DF PROTO=TCP SPT=56662 DPT=9100 SEQ=916017196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757B9CC70000000001030307) 
Nov 23 04:06:50 localhost python3.9[114470]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:51 localhost python3.9[114562]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:06:52 localhost python3.9[114654]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:06:52 localhost systemd[1]: Reloading.
Nov 23 04:06:52 localhost systemd-sysv-generator[114708]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:06:52 localhost systemd-rc-local-generator[114705]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:06:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:06:52 localhost podman[114852]: 2025-11-23 09:06:52.98385307 +0000 UTC m=+0.110460511 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, release=553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph)
Nov 23 04:06:53 localhost podman[114852]: 2025-11-23 09:06:53.094095484 +0000 UTC m=+0.220702915 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, ceph=True)
Nov 23 04:06:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54235 DF PROTO=TCP SPT=56662 DPT=9100 SEQ=916017196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757BA8DA0000000001030307) 
Nov 23 04:06:53 localhost python3.9[114903]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:53 localhost python3.9[115075]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:54 localhost python3.9[115199]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:54 localhost sshd[115307]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:06:55 localhost python3.9[115308]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62623 DF PROTO=TCP SPT=47708 DPT=9105 SEQ=1402763533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757BB1190000000001030307) 
Nov 23 04:06:56 localhost python3.9[115402]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:57 localhost python3.9[115495]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:58 localhost python3.9[115588]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:58 localhost python3.9[115682]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:06:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62624 DF PROTO=TCP SPT=47708 DPT=9105 SEQ=1402763533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757BC0D90000000001030307) 
Nov 23 04:06:59 localhost python3.9[115775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:00 localhost python3.9[115868]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:00 localhost sshd[115962]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:07:00 localhost python3.9[115961]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:01 localhost python3.9[116056]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:02 localhost python3.9[116149]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:02 localhost python3.9[116242]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:03 localhost python3.9[116335]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:04 localhost python3.9[116428]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:04 localhost python3.9[116521]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:05 localhost python3.9[116614]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54237 DF PROTO=TCP SPT=56662 DPT=9100 SEQ=916017196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757BD8D90000000001030307) 
Nov 23 04:07:06 localhost python3.9[116707]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:06 localhost python3.9[116800]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:07 localhost python3.9[116893]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62625 DF PROTO=TCP SPT=47708 DPT=9105 SEQ=1402763533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757BE0D90000000001030307) 
Nov 23 04:07:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12247 DF PROTO=TCP SPT=55594 DPT=9102 SEQ=2539247747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757BE4D90000000001030307) 
Nov 23 04:07:11 localhost systemd[1]: session-37.scope: Deactivated successfully.
Nov 23 04:07:11 localhost systemd[1]: session-37.scope: Consumed 32.052s CPU time.
Nov 23 04:07:11 localhost systemd-logind[761]: Session 37 logged out. Waiting for processes to exit.
Nov 23 04:07:11 localhost systemd-logind[761]: Removed session 37.
Nov 23 04:07:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62346 DF PROTO=TCP SPT=41206 DPT=9882 SEQ=3910559801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757BEF5C0000000001030307) 
Nov 23 04:07:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18004 DF PROTO=TCP SPT=47706 DPT=9101 SEQ=1877523252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757BF7990000000001030307) 
Nov 23 04:07:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18005 DF PROTO=TCP SPT=47706 DPT=9101 SEQ=1877523252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C07590000000001030307) 
Nov 23 04:07:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47185 DF PROTO=TCP SPT=36892 DPT=9100 SEQ=1220773812 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C11F70000000001030307) 
Nov 23 04:07:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47187 DF PROTO=TCP SPT=36892 DPT=9100 SEQ=1220773812 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C1E1A0000000001030307) 
Nov 23 04:07:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13422 DF PROTO=TCP SPT=57614 DPT=9105 SEQ=2552453762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C26590000000001030307) 
Nov 23 04:07:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13423 DF PROTO=TCP SPT=57614 DPT=9105 SEQ=2552453762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C36190000000001030307) 
Nov 23 04:07:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47189 DF PROTO=TCP SPT=36892 DPT=9100 SEQ=1220773812 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C4EDA0000000001030307) 
Nov 23 04:07:36 localhost sshd[116909]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:07:36 localhost systemd-logind[761]: New session 38 of user zuul.
Nov 23 04:07:36 localhost systemd[1]: Started Session 38 of User zuul.
Nov 23 04:07:37 localhost python3.9[117002]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 23 04:07:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13424 DF PROTO=TCP SPT=57614 DPT=9105 SEQ=2552453762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C56DA0000000001030307) 
Nov 23 04:07:38 localhost python3.9[117106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:07:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63825 DF PROTO=TCP SPT=43088 DPT=9102 SEQ=2685167353 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C5ADA0000000001030307) 
Nov 23 04:07:39 localhost python3.9[117198]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:40 localhost python3.9[117291]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:07:40 localhost python3.9[117383]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:07:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19544 DF PROTO=TCP SPT=50262 DPT=9882 SEQ=1651126800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C648C0000000001030307) 
Nov 23 04:07:41 localhost python3.9[117475]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:07:42 localhost python3.9[117548]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763888861.1630635-179-198023529034858/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:07:43 localhost python3.9[117640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:07:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16150 DF PROTO=TCP SPT=58252 DPT=9101 SEQ=567735361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C6CD90000000001030307) 
Nov 23 04:07:44 localhost python3.9[117736]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:07:44 localhost python3.9[117828]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:07:45 localhost python3.9[117918]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:07:45 localhost network[117935]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:07:45 localhost network[117936]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:07:45 localhost network[117937]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:07:46 localhost sshd[117954]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:07:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:07:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16151 DF PROTO=TCP SPT=58252 DPT=9101 SEQ=567735361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C7C990000000001030307) 
Nov 23 04:07:49 localhost python3.9[118137]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:07:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57714 DF PROTO=TCP SPT=32786 DPT=9100 SEQ=2005537401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C87270000000001030307) 
Nov 23 04:07:50 localhost python3.9[118227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:07:51 localhost python3.9[118323]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:07:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57716 DF PROTO=TCP SPT=32786 DPT=9100 SEQ=2005537401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C93190000000001030307) 
Nov 23 04:07:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13167 DF PROTO=TCP SPT=40332 DPT=9105 SEQ=176095575 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757C9B990000000001030307) 
Nov 23 04:07:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13168 DF PROTO=TCP SPT=40332 DPT=9105 SEQ=176095575 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757CAB590000000001030307) 
Nov 23 04:08:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57718 DF PROTO=TCP SPT=32786 DPT=9100 SEQ=2005537401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757CC2DA0000000001030307) 
Nov 23 04:08:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13169 DF PROTO=TCP SPT=40332 DPT=9105 SEQ=176095575 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757CCAD90000000001030307) 
Nov 23 04:08:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34638 DF PROTO=TCP SPT=45924 DPT=9102 SEQ=3528416475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757CCED90000000001030307) 
Nov 23 04:08:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10345 DF PROTO=TCP SPT=39852 DPT=9882 SEQ=260134374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757CD9BC0000000001030307) 
Nov 23 04:08:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5206 DF PROTO=TCP SPT=37972 DPT=9101 SEQ=93288276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757CE2190000000001030307) 
Nov 23 04:08:15 localhost systemd[1]: Stopping OpenSSH server daemon...
Nov 23 04:08:15 localhost systemd[1]: sshd.service: Deactivated successfully.
Nov 23 04:08:15 localhost systemd[1]: Stopped OpenSSH server daemon.
Nov 23 04:08:15 localhost systemd[1]: sshd.service: Consumed 3.573s CPU time.
Nov 23 04:08:15 localhost systemd[1]: Stopped target sshd-keygen.target.
Nov 23 04:08:15 localhost systemd[1]: Stopping sshd-keygen.target...
Nov 23 04:08:15 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:08:15 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:08:15 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:08:15 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 23 04:08:15 localhost systemd[1]: Starting OpenSSH server daemon...
Nov 23 04:08:15 localhost sshd[118443]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:08:15 localhost systemd[1]: Started OpenSSH server daemon.
Nov 23 04:08:16 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 04:08:16 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 04:08:16 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 04:08:16 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 04:08:16 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 04:08:16 localhost systemd[1]: run-rf407f45b3537434dbf6eaaf567aaa6ee.service: Deactivated successfully.
Nov 23 04:08:16 localhost systemd[1]: run-r05393e7a20404742812e442f15dccc6e.service: Deactivated successfully.
Nov 23 04:08:17 localhost systemd[1]: Stopping OpenSSH server daemon...
Nov 23 04:08:17 localhost systemd[1]: sshd.service: Deactivated successfully.
Nov 23 04:08:17 localhost systemd[1]: Stopped OpenSSH server daemon.
Nov 23 04:08:17 localhost systemd[1]: Stopped target sshd-keygen.target.
Nov 23 04:08:17 localhost systemd[1]: Stopping sshd-keygen.target...
Nov 23 04:08:17 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:08:17 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:08:17 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:08:17 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 23 04:08:17 localhost systemd[1]: Starting OpenSSH server daemon...
Nov 23 04:08:17 localhost sshd[118615]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:08:17 localhost systemd[1]: Started OpenSSH server daemon.
Nov 23 04:08:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5207 DF PROTO=TCP SPT=37972 DPT=9101 SEQ=93288276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757CF1D90000000001030307) 
Nov 23 04:08:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31171 DF PROTO=TCP SPT=52508 DPT=9100 SEQ=4170994380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757CFC570000000001030307) 
Nov 23 04:08:21 localhost sshd[118620]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:08:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31173 DF PROTO=TCP SPT=52508 DPT=9100 SEQ=4170994380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D08590000000001030307) 
Nov 23 04:08:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2719 DF PROTO=TCP SPT=51348 DPT=9105 SEQ=475088836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D10D90000000001030307) 
Nov 23 04:08:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2720 DF PROTO=TCP SPT=51348 DPT=9105 SEQ=475088836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D20990000000001030307) 
Nov 23 04:08:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31175 DF PROTO=TCP SPT=52508 DPT=9100 SEQ=4170994380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D38D90000000001030307) 
Nov 23 04:08:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2721 DF PROTO=TCP SPT=51348 DPT=9105 SEQ=475088836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D40DA0000000001030307) 
Nov 23 04:08:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17544 DF PROTO=TCP SPT=43464 DPT=9102 SEQ=1809359236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D44D90000000001030307) 
Nov 23 04:08:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57462 DF PROTO=TCP SPT=34074 DPT=9882 SEQ=4058717555 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D4EEC0000000001030307) 
Nov 23 04:08:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4352 DF PROTO=TCP SPT=51908 DPT=9101 SEQ=216272859 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D57590000000001030307) 
Nov 23 04:08:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4353 DF PROTO=TCP SPT=51908 DPT=9101 SEQ=216272859 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D67190000000001030307) 
Nov 23 04:08:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44702 DF PROTO=TCP SPT=43358 DPT=9100 SEQ=1860499556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D71870000000001030307) 
Nov 23 04:08:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44704 DF PROTO=TCP SPT=43358 DPT=9100 SEQ=1860499556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D7D9A0000000001030307) 
Nov 23 04:08:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41722 DF PROTO=TCP SPT=45018 DPT=9105 SEQ=3950818731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D85D90000000001030307) 
Nov 23 04:08:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41723 DF PROTO=TCP SPT=45018 DPT=9105 SEQ=3950818731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757D95990000000001030307) 
Nov 23 04:09:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44706 DF PROTO=TCP SPT=43358 DPT=9100 SEQ=1860499556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DACD90000000001030307) 
Nov 23 04:09:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41724 DF PROTO=TCP SPT=45018 DPT=9105 SEQ=3950818731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DB6D90000000001030307) 
Nov 23 04:09:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42541 DF PROTO=TCP SPT=53818 DPT=9102 SEQ=6570189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DBAD90000000001030307) 
Nov 23 04:09:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21991 DF PROTO=TCP SPT=50048 DPT=9882 SEQ=1745759186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DC41D0000000001030307) 
Nov 23 04:09:13 localhost sshd[119144]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:09:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62793 DF PROTO=TCP SPT=47756 DPT=9101 SEQ=1952826255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DCC590000000001030307) 
Nov 23 04:09:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62794 DF PROTO=TCP SPT=47756 DPT=9101 SEQ=1952826255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DDC1A0000000001030307) 
Nov 23 04:09:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40386 DF PROTO=TCP SPT=51140 DPT=9100 SEQ=2453811647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DE6B70000000001030307) 
Nov 23 04:09:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40388 DF PROTO=TCP SPT=51140 DPT=9100 SEQ=2453811647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DF2D90000000001030307) 
Nov 23 04:09:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41698 DF PROTO=TCP SPT=51632 DPT=9105 SEQ=3436181543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757DFB190000000001030307) 
Nov 23 04:09:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41699 DF PROTO=TCP SPT=51632 DPT=9105 SEQ=3436181543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E0ADA0000000001030307) 
Nov 23 04:09:31 localhost kernel: SELinux:  Converting 2741 SID table entries...
Nov 23 04:09:31 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:09:31 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:09:31 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:09:31 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:09:31 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:09:31 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:09:31 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:09:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40390 DF PROTO=TCP SPT=51140 DPT=9100 SEQ=2453811647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E22D90000000001030307) 
Nov 23 04:09:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41700 DF PROTO=TCP SPT=51632 DPT=9105 SEQ=3436181543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E2ADA0000000001030307) 
Nov 23 04:09:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12272 DF PROTO=TCP SPT=58102 DPT=9102 SEQ=3172678000 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E2ED90000000001030307) 
Nov 23 04:09:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10787 DF PROTO=TCP SPT=37428 DPT=9882 SEQ=3945046931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E394C0000000001030307) 
Nov 23 04:09:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5902 DF PROTO=TCP SPT=60702 DPT=9101 SEQ=3535965848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E41990000000001030307) 
Nov 23 04:09:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5903 DF PROTO=TCP SPT=60702 DPT=9101 SEQ=3535965848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E515A0000000001030307) 
Nov 23 04:09:47 localhost sshd[119301]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:09:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43908 DF PROTO=TCP SPT=43462 DPT=9100 SEQ=1163022103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E5BE60000000001030307) 
Nov 23 04:09:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43910 DF PROTO=TCP SPT=43462 DPT=9100 SEQ=1163022103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E67D90000000001030307) 
Nov 23 04:09:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33549 DF PROTO=TCP SPT=46198 DPT=9105 SEQ=2769961991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E70590000000001030307) 
Nov 23 04:09:57 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=17 res=1
Nov 23 04:09:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33550 DF PROTO=TCP SPT=46198 DPT=9105 SEQ=2769961991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E80190000000001030307) 
Nov 23 04:10:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43912 DF PROTO=TCP SPT=43462 DPT=9100 SEQ=1163022103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757E98D90000000001030307) 
Nov 23 04:10:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33551 DF PROTO=TCP SPT=46198 DPT=9105 SEQ=2769961991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757EA0D90000000001030307) 
Nov 23 04:10:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28935 DF PROTO=TCP SPT=51882 DPT=9102 SEQ=3725415445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757EA4D90000000001030307) 
Nov 23 04:10:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31388 DF PROTO=TCP SPT=57588 DPT=9882 SEQ=176052040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757EAE7D0000000001030307) 
Nov 23 04:10:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13436 DF PROTO=TCP SPT=58854 DPT=9101 SEQ=1089196715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757EB6D90000000001030307) 
Nov 23 04:10:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13437 DF PROTO=TCP SPT=58854 DPT=9101 SEQ=1089196715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757EC6990000000001030307) 
Nov 23 04:10:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6295 DF PROTO=TCP SPT=44374 DPT=9100 SEQ=288558371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757ED1170000000001030307) 
Nov 23 04:10:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6297 DF PROTO=TCP SPT=44374 DPT=9100 SEQ=288558371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757EDD1A0000000001030307) 
Nov 23 04:10:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41832 DF PROTO=TCP SPT=57470 DPT=9105 SEQ=1451268786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757EE59A0000000001030307) 
Nov 23 04:10:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41833 DF PROTO=TCP SPT=57470 DPT=9105 SEQ=1451268786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757EF5590000000001030307) 
Nov 23 04:10:30 localhost python3.9[119457]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:10:31 localhost python3.9[119549]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:10:31 localhost python3.9[119622]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889030.6003394-428-241642443307032/.source.fact _original_basename=.xjotvsny follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:10:32 localhost python3.9[119712]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:10:34 localhost python3.9[119810]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:10:35 localhost python3.9[119864]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:10:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6299 DF PROTO=TCP SPT=44374 DPT=9100 SEQ=288558371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F0CDA0000000001030307) 
Nov 23 04:10:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41834 DF PROTO=TCP SPT=57470 DPT=9105 SEQ=1451268786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F14D90000000001030307) 
Nov 23 04:10:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50367 DF PROTO=TCP SPT=36882 DPT=9102 SEQ=1489420902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F18D90000000001030307) 
Nov 23 04:10:38 localhost systemd[1]: Reloading.
Nov 23 04:10:38 localhost systemd-rc-local-generator[119899]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:10:38 localhost systemd-sysv-generator[119904]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:10:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:10:38 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 04:10:40 localhost python3.9[120004]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:10:40 localhost sshd[120011]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:10:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35903 DF PROTO=TCP SPT=46548 DPT=9882 SEQ=555504203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F23AD0000000001030307) 
Nov 23 04:10:42 localhost python3.9[120245]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 23 04:10:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33301 DF PROTO=TCP SPT=60604 DPT=9101 SEQ=3311101941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F2C190000000001030307) 
Nov 23 04:10:43 localhost python3.9[120337]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 23 04:10:45 localhost python3.9[120430]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:10:46 localhost python3.9[120522]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 23 04:10:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33302 DF PROTO=TCP SPT=60604 DPT=9101 SEQ=3311101941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F3BDA0000000001030307) 
Nov 23 04:10:48 localhost python3.9[120614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:10:48 localhost python3.9[120706]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:10:49 localhost python3.9[120779]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889048.4585772-753-128173975569115/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:10:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51517 DF PROTO=TCP SPT=60670 DPT=9100 SEQ=3554371293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F46460000000001030307) 
Nov 23 04:10:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51519 DF PROTO=TCP SPT=60670 DPT=9100 SEQ=3554371293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F52590000000001030307) 
Nov 23 04:10:55 localhost python3.9[120871]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:10:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13206 DF PROTO=TCP SPT=55202 DPT=9105 SEQ=3558923492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F5A990000000001030307) 
Nov 23 04:10:56 localhost python3.9[120965]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 23 04:10:57 localhost python3.9[121058]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 23 04:10:58 localhost python3.9[121151]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 04:10:59 localhost python3.9[121249]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 23 04:10:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13207 DF PROTO=TCP SPT=55202 DPT=9105 SEQ=3558923492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F6A5A0000000001030307) 
Nov 23 04:11:00 localhost python3.9[121395]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:11:04 localhost python3.9[121546]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:11:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51521 DF PROTO=TCP SPT=60670 DPT=9100 SEQ=3554371293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F82D90000000001030307) 
Nov 23 04:11:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13208 DF PROTO=TCP SPT=55202 DPT=9105 SEQ=3558923492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F8ADA0000000001030307) 
Nov 23 04:11:08 localhost python3.9[121653]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:11:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7050 DF PROTO=TCP SPT=58936 DPT=9102 SEQ=2945540681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F8ED90000000001030307) 
Nov 23 04:11:09 localhost python3.9[121726]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889068.1131659-1026-197268613725925/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:11:10 localhost python3.9[121818]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:11:10 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 04:11:10 localhost systemd[1]: Stopped Load Kernel Modules.
Nov 23 04:11:10 localhost systemd[1]: Stopping Load Kernel Modules...
Nov 23 04:11:10 localhost systemd[1]: Starting Load Kernel Modules...
Nov 23 04:11:10 localhost systemd-modules-load[121822]: Module 'msr' is built in
Nov 23 04:11:10 localhost systemd[1]: Finished Load Kernel Modules.
Nov 23 04:11:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19660 DF PROTO=TCP SPT=52472 DPT=9882 SEQ=3939019005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757F98DD0000000001030307) 
Nov 23 04:11:11 localhost python3.9[121915]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:11:12 localhost python3.9[121988]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889071.1994867-1095-151214542911006/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:11:12 localhost sshd[122014]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:11:13 localhost python3.9[122082]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:11:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34108 DF PROTO=TCP SPT=37064 DPT=9101 SEQ=2688232869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757FA11A0000000001030307) 
Nov 23 04:11:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34109 DF PROTO=TCP SPT=37064 DPT=9101 SEQ=2688232869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757FB0DA0000000001030307) 
Nov 23 04:11:17 localhost python3.9[122174]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:11:18 localhost python3.9[122266]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 23 04:11:19 localhost python3.9[122356]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:11:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24949 DF PROTO=TCP SPT=55512 DPT=9100 SEQ=1400698579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757FBB760000000001030307) 
Nov 23 04:11:20 localhost python3.9[122448]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:11:20 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 23 04:11:20 localhost systemd[1]: tuned.service: Deactivated successfully.
Nov 23 04:11:20 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 23 04:11:20 localhost systemd[1]: tuned.service: Consumed 1.968s CPU time, no IO.
Nov 23 04:11:20 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 23 04:11:21 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Nov 23 04:11:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24951 DF PROTO=TCP SPT=55512 DPT=9100 SEQ=1400698579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757FC7990000000001030307) 
Nov 23 04:11:23 localhost python3.9[122550]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 23 04:11:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41848 DF PROTO=TCP SPT=53914 DPT=9105 SEQ=770125436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757FCFDA0000000001030307) 
Nov 23 04:11:27 localhost python3.9[122642]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:11:28 localhost systemd[1]: Reloading.
Nov 23 04:11:28 localhost systemd-rc-local-generator[122672]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:11:28 localhost systemd-sysv-generator[122675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:11:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:11:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41849 DF PROTO=TCP SPT=53914 DPT=9105 SEQ=770125436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757FDF9A0000000001030307) 
Nov 23 04:11:29 localhost python3.9[122772]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:11:30 localhost systemd[1]: Reloading.
Nov 23 04:11:31 localhost systemd-rc-local-generator[122802]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:11:31 localhost systemd-sysv-generator[122806]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:11:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:11:32 localhost python3.9[122903]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:11:32 localhost python3.9[122996]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:11:32 localhost kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k FS
Nov 23 04:11:33 localhost python3.9[123089]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:11:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24953 DF PROTO=TCP SPT=55512 DPT=9100 SEQ=1400698579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A757FF6DA0000000001030307) 
Nov 23 04:11:35 localhost python3.9[123188]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:11:36 localhost python3.9[123281]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:11:36 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 04:11:36 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 23 04:11:36 localhost systemd[1]: Stopping Apply Kernel Variables...
Nov 23 04:11:36 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 23 04:11:36 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 04:11:36 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 23 04:11:36 localhost systemd[1]: session-38.scope: Deactivated successfully.
Nov 23 04:11:36 localhost systemd[1]: session-38.scope: Consumed 2min 2.081s CPU time.
Nov 23 04:11:36 localhost systemd-logind[761]: Session 38 logged out. Waiting for processes to exit.
Nov 23 04:11:36 localhost systemd-logind[761]: Removed session 38.
Nov 23 04:11:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41850 DF PROTO=TCP SPT=53914 DPT=9105 SEQ=770125436 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758000DA0000000001030307) 
Nov 23 04:11:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59132 DF PROTO=TCP SPT=47560 DPT=9102 SEQ=1462130427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758002DA0000000001030307) 
Nov 23 04:11:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15246 DF PROTO=TCP SPT=35420 DPT=9882 SEQ=183931100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75800E0C0000000001030307) 
Nov 23 04:11:42 localhost sshd[123301]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:11:42 localhost systemd-logind[761]: New session 39 of user zuul.
Nov 23 04:11:42 localhost systemd[1]: Started Session 39 of User zuul.
Nov 23 04:11:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31894 DF PROTO=TCP SPT=43564 DPT=9101 SEQ=2673615762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758016590000000001030307) 
Nov 23 04:11:43 localhost python3.9[123394]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:11:44 localhost sshd[123489]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:11:44 localhost python3.9[123488]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:11:46 localhost python3.9[123586]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:11:47 localhost python3.9[123677]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:11:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31895 DF PROTO=TCP SPT=43564 DPT=9101 SEQ=2673615762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758026190000000001030307) 
Nov 23 04:11:48 localhost python3.9[123773]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:11:49 localhost python3.9[123827]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:11:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22857 DF PROTO=TCP SPT=53398 DPT=9100 SEQ=3211276952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758030A60000000001030307) 
Nov 23 04:11:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22859 DF PROTO=TCP SPT=53398 DPT=9100 SEQ=3211276952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75803C990000000001030307) 
Nov 23 04:11:53 localhost python3.9[123921]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:11:54 localhost python3.9[124068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:11:55 localhost python3.9[124160]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:11:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36950 DF PROTO=TCP SPT=38152 DPT=9105 SEQ=2936313808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758045190000000001030307) 
Nov 23 04:11:56 localhost python3.9[124265]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:11:56 localhost python3.9[124313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:11:57 localhost python3.9[124405]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:11:57 localhost python3.9[124478]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889116.8082304-325-254548809145723/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:11:58 localhost python3.9[124570]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:11:59 localhost python3.9[124662]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:11:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36951 DF PROTO=TCP SPT=38152 DPT=9105 SEQ=2936313808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758054D90000000001030307) 
Nov 23 04:11:59 localhost python3.9[124754]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:12:00 localhost python3.9[124846]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:12:01 localhost python3.9[124936]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:12:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:12:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5640 writes, 24K keys, 5640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5640 writes, 724 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:12:02 localhost python3.9[125030]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:12:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22861 DF PROTO=TCP SPT=53398 DPT=9100 SEQ=3211276952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75806CD90000000001030307) 
Nov 23 04:12:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:12:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 4929 writes, 22K keys, 4929 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4929 writes, 684 syncs, 7.21 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:12:06 localhost python3.9[125154]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:12:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36952 DF PROTO=TCP SPT=38152 DPT=9105 SEQ=2936313808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758074D90000000001030307) 
Nov 23 04:12:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19846 DF PROTO=TCP SPT=52858 DPT=9102 SEQ=2668768707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758078D90000000001030307) 
Nov 23 04:12:09 localhost sshd[125188]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:12:10 localhost python3.9[125296]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:12:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21793 DF PROTO=TCP SPT=54320 DPT=9882 SEQ=3879186899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580833C0000000001030307) 
Nov 23 04:12:12 localhost sshd[125299]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:12:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24042 DF PROTO=TCP SPT=44164 DPT=9101 SEQ=1553816087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75808B990000000001030307) 
Nov 23 04:12:14 localhost python3.9[125398]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:12:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24043 DF PROTO=TCP SPT=44164 DPT=9101 SEQ=1553816087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75809B590000000001030307) 
Nov 23 04:12:19 localhost python3.9[125492]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:12:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28799 DF PROTO=TCP SPT=40122 DPT=9100 SEQ=4154702638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580A5D60000000001030307) 
Nov 23 04:12:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28801 DF PROTO=TCP SPT=40122 DPT=9100 SEQ=4154702638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580B1D90000000001030307) 
Nov 23 04:12:23 localhost python3.9[125586]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:12:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9066 DF PROTO=TCP SPT=52642 DPT=9105 SEQ=2784545090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580BA590000000001030307) 
Nov 23 04:12:27 localhost python3.9[125680]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:12:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9067 DF PROTO=TCP SPT=52642 DPT=9105 SEQ=2784545090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580CA190000000001030307) 
Nov 23 04:12:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28803 DF PROTO=TCP SPT=40122 DPT=9100 SEQ=4154702638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580E2DA0000000001030307) 
Nov 23 04:12:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9068 DF PROTO=TCP SPT=52642 DPT=9105 SEQ=2784545090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580EAD90000000001030307) 
Nov 23 04:12:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42628 DF PROTO=TCP SPT=56178 DPT=9102 SEQ=157270468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580EED90000000001030307) 
Nov 23 04:12:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53055 DF PROTO=TCP SPT=35444 DPT=9882 SEQ=1068222070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7580F86C0000000001030307) 
Nov 23 04:12:41 localhost sshd[125774]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:12:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1120 DF PROTO=TCP SPT=50382 DPT=9101 SEQ=4011331816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758100D90000000001030307) 
Nov 23 04:12:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1121 DF PROTO=TCP SPT=50382 DPT=9101 SEQ=4011331816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758110990000000001030307) 
Nov 23 04:12:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55848 DF PROTO=TCP SPT=53962 DPT=9100 SEQ=925342823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75811B060000000001030307) 
Nov 23 04:12:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55850 DF PROTO=TCP SPT=53962 DPT=9100 SEQ=925342823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758127190000000001030307) 
Nov 23 04:12:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13933 DF PROTO=TCP SPT=48412 DPT=9105 SEQ=932833285 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75812F590000000001030307) 
Nov 23 04:12:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13934 DF PROTO=TCP SPT=48412 DPT=9105 SEQ=932833285 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75813F190000000001030307) 
Nov 23 04:13:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55852 DF PROTO=TCP SPT=53962 DPT=9100 SEQ=925342823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758156D90000000001030307) 
Nov 23 04:13:07 localhost sshd[125776]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:13:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13935 DF PROTO=TCP SPT=48412 DPT=9105 SEQ=932833285 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75815ED90000000001030307) 
Nov 23 04:13:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31693 DF PROTO=TCP SPT=51466 DPT=9102 SEQ=2241717868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758162D90000000001030307) 
Nov 23 04:13:10 localhost python3.9[125872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:13:10 localhost python3.9[126021]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:13:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40987 DF PROTO=TCP SPT=56918 DPT=9882 SEQ=2697702391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75816D9D0000000001030307) 
Nov 23 04:13:11 localhost python3.9[126138]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763889190.2233932-724-237150721638356/.source.json _original_basename=.y_qn77ym follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:13:11 localhost podman[126165]: 
Nov 23 04:13:11 localhost podman[126165]: 2025-11-23 09:13:11.428788431 +0000 UTC m=+0.069664977 container create 6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hertz, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=)
Nov 23 04:13:11 localhost systemd[1]: Started libpod-conmon-6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698.scope.
Nov 23 04:13:11 localhost systemd[1]: Started libcrun container.
Nov 23 04:13:11 localhost podman[126165]: 2025-11-23 09:13:11.407331753 +0000 UTC m=+0.048208349 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:13:11 localhost podman[126165]: 2025-11-23 09:13:11.507033898 +0000 UTC m=+0.147910474 container init 6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hertz, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, release=553, architecture=x86_64, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7)
Nov 23 04:13:11 localhost podman[126165]: 2025-11-23 09:13:11.519413541 +0000 UTC m=+0.160290117 container start 6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hertz, io.openshift.tags=rhceph ceph, version=7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main)
Nov 23 04:13:11 localhost podman[126165]: 2025-11-23 09:13:11.51974749 +0000 UTC m=+0.160624066 container attach 6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hertz, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Nov 23 04:13:11 localhost systemd[1]: libpod-6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698.scope: Deactivated successfully.
Nov 23 04:13:11 localhost objective_hertz[126194]: 167 167
Nov 23 04:13:11 localhost podman[126165]: 2025-11-23 09:13:11.525943848 +0000 UTC m=+0.166820454 container died 6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hertz, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, version=7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main)
Nov 23 04:13:11 localhost podman[126199]: 2025-11-23 09:13:11.636051803 +0000 UTC m=+0.093859559 container remove 6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hertz, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, name=rhceph, build-date=2025-09-24T08:57:55, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:13:11 localhost systemd[1]: libpod-conmon-6993babafa89f85decbca5691afee3a59156d306b8a2416159983cffe427a698.scope: Deactivated successfully.
Nov 23 04:13:11 localhost podman[126223]: 
Nov 23 04:13:11 localhost podman[126223]: 2025-11-23 09:13:11.874412172 +0000 UTC m=+0.075914915 container create c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_tu, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:13:11 localhost systemd[1]: Started libpod-conmon-c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8.scope.
Nov 23 04:13:11 localhost systemd[1]: Started libcrun container.
Nov 23 04:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3358549e6bfbd471f0e7732a6dfd97daa1568ffb916cd10d7f6f01da02df5799/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 04:13:11 localhost podman[126223]: 2025-11-23 09:13:11.848876105 +0000 UTC m=+0.050378828 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3358549e6bfbd471f0e7732a6dfd97daa1568ffb916cd10d7f6f01da02df5799/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 04:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3358549e6bfbd471f0e7732a6dfd97daa1568ffb916cd10d7f6f01da02df5799/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 04:13:11 localhost podman[126223]: 2025-11-23 09:13:11.955001413 +0000 UTC m=+0.156504166 container init c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_tu, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.33.12, release=553, io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:13:11 localhost podman[126223]: 2025-11-23 09:13:11.966692518 +0000 UTC m=+0.168195261 container start c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_tu, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, RELEASE=main, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container)
Nov 23 04:13:11 localhost podman[126223]: 2025-11-23 09:13:11.967107829 +0000 UTC m=+0.168610622 container attach c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_tu, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, version=7, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:13:12 localhost systemd[1]: tmp-crun.eCJULs.mount: Deactivated successfully.
Nov 23 04:13:12 localhost systemd[1]: var-lib-containers-storage-overlay-3f9ba122c437f2dbe48c5a5206d40cd177162f3c857789e38eac80ee35fcc8c2-merged.mount: Deactivated successfully.
Nov 23 04:13:12 localhost suspicious_tu[126239]: [
Nov 23 04:13:12 localhost suspicious_tu[126239]:    {
Nov 23 04:13:12 localhost suspicious_tu[126239]:        "available": false,
Nov 23 04:13:12 localhost suspicious_tu[126239]:        "ceph_device": false,
Nov 23 04:13:12 localhost suspicious_tu[126239]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 04:13:12 localhost suspicious_tu[126239]:        "lsm_data": {},
Nov 23 04:13:12 localhost suspicious_tu[126239]:        "lvs": [],
Nov 23 04:13:12 localhost suspicious_tu[126239]:        "path": "/dev/sr0",
Nov 23 04:13:12 localhost suspicious_tu[126239]:        "rejected_reasons": [
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "Has a FileSystem",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "Insufficient space (<5GB)"
Nov 23 04:13:12 localhost suspicious_tu[126239]:        ],
Nov 23 04:13:12 localhost suspicious_tu[126239]:        "sys_api": {
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "actuators": null,
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "device_nodes": "sr0",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "human_readable_size": "482.00 KB",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "id_bus": "ata",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "model": "QEMU DVD-ROM",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "nr_requests": "2",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "partitions": {},
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "path": "/dev/sr0",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "removable": "1",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "rev": "2.5+",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "ro": "0",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "rotational": "1",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "sas_address": "",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "sas_device_handle": "",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "scheduler_mode": "mq-deadline",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "sectors": 0,
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "sectorsize": "2048",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "size": 493568.0,
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "support_discard": "0",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "type": "disk",
Nov 23 04:13:12 localhost suspicious_tu[126239]:            "vendor": "QEMU"
Nov 23 04:13:12 localhost suspicious_tu[126239]:        }
Nov 23 04:13:12 localhost suspicious_tu[126239]:    }
Nov 23 04:13:12 localhost suspicious_tu[126239]: ]
Nov 23 04:13:12 localhost systemd[1]: libpod-c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8.scope: Deactivated successfully.
Nov 23 04:13:12 localhost systemd[1]: libpod-c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8.scope: Consumed 1.061s CPU time.
Nov 23 04:13:12 localhost podman[126223]: 2025-11-23 09:13:12.994078367 +0000 UTC m=+1.195581160 container died c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_tu, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, description=Red Hat Ceph Storage 7)
Nov 23 04:13:13 localhost systemd[1]: tmp-crun.jGTHyL.mount: Deactivated successfully.
Nov 23 04:13:13 localhost systemd[1]: var-lib-containers-storage-overlay-3358549e6bfbd471f0e7732a6dfd97daa1568ffb916cd10d7f6f01da02df5799-merged.mount: Deactivated successfully.
Nov 23 04:13:13 localhost podman[127793]: 2025-11-23 09:13:13.081601644 +0000 UTC m=+0.079081920 container remove c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_tu, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, ceph=True)
Nov 23 04:13:13 localhost systemd[1]: libpod-conmon-c79a5c2ce26e15707de76accbcdf1312907aece2cd485c27d5566a34f3550be8.scope: Deactivated successfully.
Nov 23 04:13:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16540 DF PROTO=TCP SPT=50064 DPT=9101 SEQ=2530808169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758175D90000000001030307) 
Nov 23 04:13:13 localhost python3.9[127853]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 23 04:13:13 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation.
Nov 23 04:13:13 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 04:13:13 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:13:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16541 DF PROTO=TCP SPT=50064 DPT=9101 SEQ=2530808169 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758185990000000001030307) 
Nov 23 04:13:19 localhost podman[127866]: 2025-11-23 09:13:13.607439196 +0000 UTC m=+0.049357621 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 23 04:13:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3015 DF PROTO=TCP SPT=58088 DPT=9100 SEQ=2647293193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758190370000000001030307) 
Nov 23 04:13:21 localhost python3.9[128080]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 23 04:13:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3017 DF PROTO=TCP SPT=58088 DPT=9100 SEQ=2647293193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75819C590000000001030307) 
Nov 23 04:13:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55813 DF PROTO=TCP SPT=49420 DPT=9105 SEQ=2156614760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7581A49A0000000001030307) 
Nov 23 04:13:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55814 DF PROTO=TCP SPT=49420 DPT=9105 SEQ=2156614760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7581B4590000000001030307) 
Nov 23 04:13:31 localhost podman[128092]: 2025-11-23 09:13:21.165914828 +0000 UTC m=+0.038989181 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 23 04:13:33 localhost python3.9[128388]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 23 04:13:34 localhost podman[128400]: 2025-11-23 09:13:33.154821859 +0000 UTC m=+0.047367146 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Nov 23 04:13:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3019 DF PROTO=TCP SPT=58088 DPT=9100 SEQ=2647293193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7581CCD90000000001030307) 
Nov 23 04:13:35 localhost python3.9[128566]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 23 04:13:36 localhost podman[128579]: 2025-11-23 09:13:35.974853217 +0000 UTC m=+0.054751345 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:13:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55815 DF PROTO=TCP SPT=49420 DPT=9105 SEQ=2156614760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7581D4D90000000001030307) 
Nov 23 04:13:38 localhost python3.9[128744]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 23 04:13:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36259 DF PROTO=TCP SPT=39678 DPT=9102 SEQ=3358148642 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7581D8D90000000001030307) 
Nov 23 04:13:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5465 DF PROTO=TCP SPT=35622 DPT=9882 SEQ=2014746612 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7581E2CC0000000001030307) 
Nov 23 04:13:41 localhost podman[128758]: 2025-11-23 09:13:38.403976118 +0000 UTC m=+0.050612634 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Nov 23 04:13:42 localhost python3.9[128935]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 23 04:13:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25368 DF PROTO=TCP SPT=45742 DPT=9101 SEQ=2302095804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7581EB190000000001030307) 
Nov 23 04:13:44 localhost sshd[128986]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:13:44 localhost podman[128948]: 2025-11-23 09:13:42.989172315 +0000 UTC m=+0.048598490 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Nov 23 04:13:45 localhost systemd[1]: session-39.scope: Deactivated successfully.
Nov 23 04:13:45 localhost systemd[1]: session-39.scope: Consumed 1min 37.170s CPU time.
Nov 23 04:13:45 localhost systemd-logind[761]: Session 39 logged out. Waiting for processes to exit.
Nov 23 04:13:45 localhost systemd-logind[761]: Removed session 39.
Nov 23 04:13:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25369 DF PROTO=TCP SPT=45742 DPT=9101 SEQ=2302095804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7581FAD90000000001030307) 
Nov 23 04:13:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15462 DF PROTO=TCP SPT=37544 DPT=9100 SEQ=3174468333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758205670000000001030307) 
Nov 23 04:13:51 localhost sshd[129060]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:13:51 localhost systemd-logind[761]: New session 40 of user zuul.
Nov 23 04:13:51 localhost systemd[1]: Started Session 40 of User zuul.
Nov 23 04:13:52 localhost python3.9[129153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:13:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15464 DF PROTO=TCP SPT=37544 DPT=9100 SEQ=3174468333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758211590000000001030307) 
Nov 23 04:13:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20319 DF PROTO=TCP SPT=59388 DPT=9105 SEQ=1868842229 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758219D90000000001030307) 
Nov 23 04:13:56 localhost python3.9[129249]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 23 04:13:57 localhost python3.9[129342]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:13:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20320 DF PROTO=TCP SPT=59388 DPT=9105 SEQ=1868842229 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758229990000000001030307) 
Nov 23 04:13:59 localhost python3.9[129548]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:14:03 localhost python3.9[129749]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:14:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15466 DF PROTO=TCP SPT=37544 DPT=9100 SEQ=3174468333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758240D90000000001030307) 
Nov 23 04:14:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20321 DF PROTO=TCP SPT=59388 DPT=9105 SEQ=1868842229 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75824AD90000000001030307) 
Nov 23 04:14:07 localhost python3.9[129843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:14:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58420 DF PROTO=TCP SPT=46954 DPT=9102 SEQ=3746378370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75824CDA0000000001030307) 
Nov 23 04:14:09 localhost sshd[129916]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:14:09 localhost python3.9[129938]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:14:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32106 DF PROTO=TCP SPT=57624 DPT=9882 SEQ=505488617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758257FC0000000001030307) 
Nov 23 04:14:11 localhost python3.9[130030]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 23 04:14:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8076 DF PROTO=TCP SPT=41986 DPT=9101 SEQ=3033830663 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758260590000000001030307) 
Nov 23 04:14:13 localhost kernel: SELinux:  Converting 2743 SID table entries...
Nov 23 04:14:13 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:14:13 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:14:13 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:14:13 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:14:13 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:14:13 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:14:13 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:14:13 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=18 res=1
Nov 23 04:14:14 localhost python3.9[130469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:14:15 localhost python3.9[130579]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:14:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8077 DF PROTO=TCP SPT=41986 DPT=9101 SEQ=3033830663 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582701A0000000001030307) 
Nov 23 04:14:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5465 DF PROTO=TCP SPT=50976 DPT=9100 SEQ=3884328405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75827A970000000001030307) 
Nov 23 04:14:20 localhost python3.9[130688]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:14:22 localhost python3.9[130933]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 04:14:22 localhost python3.9[131023]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:14:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5467 DF PROTO=TCP SPT=50976 DPT=9100 SEQ=3884328405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582869A0000000001030307) 
Nov 23 04:14:23 localhost python3.9[131117]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:14:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13938 DF PROTO=TCP SPT=56330 DPT=9105 SEQ=3865064557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75828F190000000001030307) 
Nov 23 04:14:27 localhost python3.9[131211]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:14:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13939 DF PROTO=TCP SPT=56330 DPT=9105 SEQ=3865064557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75829ED90000000001030307) 
Nov 23 04:14:31 localhost python3.9[131305]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 04:14:32 localhost systemd[1]: Reloading.
Nov 23 04:14:32 localhost systemd-rc-local-generator[131330]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:14:32 localhost systemd-sysv-generator[131335]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:14:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:14:33 localhost python3.9[131437]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:14:34 localhost python3.9[131529]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:35 localhost python3.9[131623]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:35 localhost sshd[131644]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:14:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5469 DF PROTO=TCP SPT=50976 DPT=9100 SEQ=3884328405 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582B6D90000000001030307) 
Nov 23 04:14:36 localhost python3.9[131717]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:36 localhost python3.9[131809]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:14:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13940 DF PROTO=TCP SPT=56330 DPT=9105 SEQ=3865064557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582BEDA0000000001030307) 
Nov 23 04:14:37 localhost python3.9[131882]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889276.4968433-565-45077791039067/.source _original_basename=.d6k_f74l follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:38 localhost python3.9[131974]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37730 DF PROTO=TCP SPT=50026 DPT=9102 SEQ=3267150487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582C2D90000000001030307) 
Nov 23 04:14:39 localhost python3.9[132066]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 23 04:14:40 localhost python3.9[132158]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:41 localhost python3.9[132250]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:14:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28232 DF PROTO=TCP SPT=40014 DPT=9882 SEQ=3528919115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582CD2D0000000001030307) 
Nov 23 04:14:41 localhost python3.9[132323]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889280.6408923-691-30010664674188/.source.yaml _original_basename=.94fz7o1o follow=False checksum=4c28d1662755c608a6ffaa942e27a2488c0a78a3 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:42 localhost python3.9[132415]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 23 04:14:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5990 DF PROTO=TCP SPT=35902 DPT=9101 SEQ=3535559550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582D5990000000001030307) 
Nov 23 04:14:43 localhost ansible-async_wrapper.py[132520]: Invoked with j952811454393 300 /home/zuul/.ansible/tmp/ansible-tmp-1763889282.8937886-763-49254317225241/AnsiballZ_edpm_os_net_config.py _
Nov 23 04:14:43 localhost ansible-async_wrapper.py[132523]: Starting module and watcher
Nov 23 04:14:43 localhost ansible-async_wrapper.py[132523]: Start watching 132524 (300)
Nov 23 04:14:43 localhost ansible-async_wrapper.py[132524]: Start module (132524)
Nov 23 04:14:43 localhost ansible-async_wrapper.py[132520]: Return async_wrapper task started.
Nov 23 04:14:43 localhost python3.9[132525]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Nov 23 04:14:44 localhost ansible-async_wrapper.py[132524]: Module complete (132524)
Nov 23 04:14:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5991 DF PROTO=TCP SPT=35902 DPT=9101 SEQ=3535559550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582E5590000000001030307) 
Nov 23 04:14:47 localhost python3.9[132617]: ansible-ansible.legacy.async_status Invoked with jid=j952811454393.132520 mode=status _async_dir=/root/.ansible_async
Nov 23 04:14:47 localhost python3.9[132676]: ansible-ansible.legacy.async_status Invoked with jid=j952811454393.132520 mode=cleanup _async_dir=/root/.ansible_async
Nov 23 04:14:48 localhost python3.9[132768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:14:48 localhost ansible-async_wrapper.py[132523]: Done in kid B.
Nov 23 04:14:49 localhost python3.9[132841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889288.1907-829-174721815601602/.source.returncode _original_basename=.441pjywp follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:49 localhost python3.9[132933]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:14:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45083 DF PROTO=TCP SPT=33670 DPT=9100 SEQ=4044173444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582EFC60000000001030307) 
Nov 23 04:14:50 localhost python3.9[133006]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889289.465284-877-245041141472523/.source.cfg _original_basename=.n_gk2n9l follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:14:51 localhost python3.9[133098]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:14:51 localhost systemd[1]: Reloading Network Manager...
Nov 23 04:14:51 localhost NetworkManager[5990]: <info>  [1763889291.4493] audit: op="reload" arg="0" pid=133102 uid=0 result="success"
Nov 23 04:14:51 localhost NetworkManager[5990]: <info>  [1763889291.4507] config: signal: SIGHUP (no changes from disk)
Nov 23 04:14:51 localhost systemd[1]: Reloaded Network Manager.
Nov 23 04:14:51 localhost systemd[1]: session-40.scope: Deactivated successfully.
Nov 23 04:14:51 localhost systemd[1]: session-40.scope: Consumed 36.859s CPU time.
Nov 23 04:14:51 localhost systemd-logind[761]: Session 40 logged out. Waiting for processes to exit.
Nov 23 04:14:51 localhost systemd-logind[761]: Removed session 40.
Nov 23 04:14:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45085 DF PROTO=TCP SPT=33670 DPT=9100 SEQ=4044173444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7582FBD90000000001030307) 
Nov 23 04:14:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17208 DF PROTO=TCP SPT=44326 DPT=9105 SEQ=1331552982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758304190000000001030307) 
Nov 23 04:14:57 localhost sshd[133117]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:14:57 localhost systemd-logind[761]: New session 41 of user zuul.
Nov 23 04:14:57 localhost systemd[1]: Started Session 41 of User zuul.
Nov 23 04:14:58 localhost python3.9[133210]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:14:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17209 DF PROTO=TCP SPT=44326 DPT=9105 SEQ=1331552982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758313D90000000001030307) 
Nov 23 04:15:00 localhost python3.9[133304]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:15:03 localhost python3.9[133449]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:15:04 localhost systemd-logind[761]: Session 41 logged out. Waiting for processes to exit.
Nov 23 04:15:04 localhost systemd[1]: session-41.scope: Deactivated successfully.
Nov 23 04:15:04 localhost systemd[1]: session-41.scope: Consumed 2.085s CPU time.
Nov 23 04:15:04 localhost systemd-logind[761]: Removed session 41.
Nov 23 04:15:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45087 DF PROTO=TCP SPT=33670 DPT=9100 SEQ=4044173444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75832CD90000000001030307) 
Nov 23 04:15:07 localhost sshd[133465]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:15:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17210 DF PROTO=TCP SPT=44326 DPT=9105 SEQ=1331552982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758334D90000000001030307) 
Nov 23 04:15:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18849 DF PROTO=TCP SPT=45488 DPT=9102 SEQ=3671047223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758338D90000000001030307) 
Nov 23 04:15:10 localhost sshd[133467]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:15:10 localhost systemd-logind[761]: New session 42 of user zuul.
Nov 23 04:15:10 localhost systemd[1]: Started Session 42 of User zuul.
Nov 23 04:15:11 localhost python3.9[133560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:15:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65456 DF PROTO=TCP SPT=39434 DPT=9882 SEQ=3392430489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583425C0000000001030307) 
Nov 23 04:15:12 localhost python3.9[133654]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:15:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4570 DF PROTO=TCP SPT=36238 DPT=9101 SEQ=3947223182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75834A9A0000000001030307) 
Nov 23 04:15:13 localhost sshd[133673]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:15:14 localhost python3.9[133752]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:15:15 localhost python3.9[133806]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:15:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4571 DF PROTO=TCP SPT=36238 DPT=9101 SEQ=3947223182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75835A590000000001030307) 
Nov 23 04:15:19 localhost python3.9[134027]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:15:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9600 DF PROTO=TCP SPT=58782 DPT=9100 SEQ=1266607561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758364F60000000001030307) 
Nov 23 04:15:21 localhost python3.9[134174]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:15:22 localhost python3.9[134266]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:15:23 localhost python3.9[134371]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:15:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9602 DF PROTO=TCP SPT=58782 DPT=9100 SEQ=1266607561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758371190000000001030307) 
Nov 23 04:15:23 localhost python3.9[134419]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:15:23 localhost auditd[727]: Audit daemon rotating log files
Nov 23 04:15:24 localhost python3.9[134511]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:15:24 localhost python3.9[134559]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:15:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19161 DF PROTO=TCP SPT=47186 DPT=9105 SEQ=2166458298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758379590000000001030307) 
Nov 23 04:15:25 localhost python3.9[134651]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:15:26 localhost python3.9[134743]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:15:27 localhost python3.9[134835]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:15:28 localhost python3.9[134927]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:15:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19162 DF PROTO=TCP SPT=47186 DPT=9105 SEQ=2166458298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758389190000000001030307) 
Nov 23 04:15:29 localhost python3.9[135019]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:15:34 localhost python3.9[135113]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:15:35 localhost python3.9[135207]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:15:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9604 DF PROTO=TCP SPT=58782 DPT=9100 SEQ=1266607561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583A0D90000000001030307) 
Nov 23 04:15:36 localhost python3.9[135299]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:15:36 localhost python3.9[135391]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:15:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19163 DF PROTO=TCP SPT=47186 DPT=9105 SEQ=2166458298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583A8DA0000000001030307) 
Nov 23 04:15:38 localhost sshd[135407]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:15:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55536 DF PROTO=TCP SPT=52946 DPT=9102 SEQ=2592587949 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583ACDA0000000001030307) 
Nov 23 04:15:39 localhost python3.9[135486]: ansible-service_facts Invoked
Nov 23 04:15:39 localhost network[135503]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:15:39 localhost network[135504]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:15:39 localhost network[135505]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:15:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:15:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18802 DF PROTO=TCP SPT=46296 DPT=9882 SEQ=3731795840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583B78C0000000001030307) 
Nov 23 04:15:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44519 DF PROTO=TCP SPT=50528 DPT=9101 SEQ=2629666195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583BFDA0000000001030307) 
Nov 23 04:15:46 localhost sshd[135784]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:15:46 localhost python3.9[135829]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:15:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44520 DF PROTO=TCP SPT=50528 DPT=9101 SEQ=2629666195 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583CF990000000001030307) 
Nov 23 04:15:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1896 DF PROTO=TCP SPT=44306 DPT=9100 SEQ=429312137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583DA270000000001030307) 
Nov 23 04:15:51 localhost python3.9[135923]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 23 04:15:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1898 DF PROTO=TCP SPT=44306 DPT=9100 SEQ=429312137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583E6190000000001030307) 
Nov 23 04:15:54 localhost python3.9[136015]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:15:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21778 DF PROTO=TCP SPT=56222 DPT=9105 SEQ=3653195629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583EE9A0000000001030307) 
Nov 23 04:15:55 localhost python3.9[136090]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889353.3074749-659-205823954535039/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:15:56 localhost python3.9[136184]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:15:57 localhost python3.9[136259]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889355.9158916-704-231450763020644/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:15:58 localhost python3.9[136353]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:15:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21779 DF PROTO=TCP SPT=56222 DPT=9105 SEQ=3653195629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7583FE590000000001030307) 
Nov 23 04:16:00 localhost python3.9[136447]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:16:02 localhost python3.9[136501]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:16:04 localhost python3.9[136595]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:16:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1900 DF PROTO=TCP SPT=44306 DPT=9100 SEQ=429312137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758416D90000000001030307) 
Nov 23 04:16:06 localhost python3.9[136649]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:16:06 localhost chronyd[25884]: chronyd exiting
Nov 23 04:16:06 localhost systemd[1]: Stopping NTP client/server...
Nov 23 04:16:06 localhost systemd[1]: chronyd.service: Deactivated successfully.
Nov 23 04:16:06 localhost systemd[1]: Stopped NTP client/server.
Nov 23 04:16:06 localhost systemd[1]: Starting NTP client/server...
Nov 23 04:16:06 localhost chronyd[136658]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Nov 23 04:16:06 localhost chronyd[136658]: Frequency -26.335 +/- 0.258 ppm read from /var/lib/chrony/drift
Nov 23 04:16:06 localhost chronyd[136658]: Loaded seccomp filter (level 2)
Nov 23 04:16:06 localhost systemd[1]: Started NTP client/server.
Nov 23 04:16:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21780 DF PROTO=TCP SPT=56222 DPT=9105 SEQ=3653195629 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75841EDA0000000001030307) 
Nov 23 04:16:08 localhost systemd[1]: session-42.scope: Deactivated successfully.
Nov 23 04:16:08 localhost systemd[1]: session-42.scope: Consumed 30.173s CPU time.
Nov 23 04:16:08 localhost systemd-logind[761]: Session 42 logged out. Waiting for processes to exit.
Nov 23 04:16:08 localhost systemd-logind[761]: Removed session 42.
Nov 23 04:16:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1227 DF PROTO=TCP SPT=44000 DPT=9102 SEQ=3077552197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758422D90000000001030307) 
Nov 23 04:16:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15746 DF PROTO=TCP SPT=41642 DPT=9882 SEQ=3368423546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75842CBC0000000001030307) 
Nov 23 04:16:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24421 DF PROTO=TCP SPT=32916 DPT=9101 SEQ=2121487875 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758435190000000001030307) 
Nov 23 04:16:13 localhost sshd[136674]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:16:14 localhost systemd-logind[761]: New session 43 of user zuul.
Nov 23 04:16:14 localhost systemd[1]: Started Session 43 of User zuul.
Nov 23 04:16:15 localhost python3.9[136767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:16:16 localhost python3.9[136863]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24422 DF PROTO=TCP SPT=32916 DPT=9101 SEQ=2121487875 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758444D90000000001030307) 
Nov 23 04:16:17 localhost python3.9[136968]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:18 localhost python3.9[137016]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.soo8i4u7 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:20 localhost python3.9[137185]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49376 DF PROTO=TCP SPT=51882 DPT=9100 SEQ=1657782759 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75844F570000000001030307) 
Nov 23 04:16:20 localhost python3.9[137260]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889379.5989115-145-97405807642236/.source _original_basename=.2jfzduay follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:22 localhost python3.9[137352]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:16:22 localhost python3.9[137444]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49378 DF PROTO=TCP SPT=51882 DPT=9100 SEQ=1657782759 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75845B590000000001030307) 
Nov 23 04:16:23 localhost python3.9[137517]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889382.4369392-218-30470412854146/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:16:24 localhost python3.9[137609]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:24 localhost python3.9[137682]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889383.8183894-218-44800477207612/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:16:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5510 DF PROTO=TCP SPT=46444 DPT=9105 SEQ=575651890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758463D90000000001030307) 
Nov 23 04:16:25 localhost python3.9[137774]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:26 localhost python3.9[137866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:26 localhost python3.9[137939]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889385.5714889-328-208149991562316/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:27 localhost python3.9[138031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:27 localhost python3.9[138104]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889386.8495276-373-171938071845701/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:28 localhost python3.9[138196]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:16:28 localhost systemd[1]: Reloading.
Nov 23 04:16:29 localhost systemd-sysv-generator[138224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:16:29 localhost systemd-rc-local-generator[138220]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:16:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:16:29 localhost systemd[1]: Reloading.
Nov 23 04:16:29 localhost systemd-sysv-generator[138265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:16:29 localhost systemd-rc-local-generator[138261]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:16:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5511 DF PROTO=TCP SPT=46444 DPT=9105 SEQ=575651890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758473990000000001030307) 
Nov 23 04:16:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:16:29 localhost systemd[1]: Starting dnf makecache...
Nov 23 04:16:29 localhost systemd[1]: Starting EDPM Container Shutdown...
Nov 23 04:16:29 localhost systemd[1]: Finished EDPM Container Shutdown.
Nov 23 04:16:29 localhost dnf[138272]: Updating Subscription Management repositories.
Nov 23 04:16:31 localhost python3.9[138366]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:31 localhost dnf[138272]: Metadata cache refreshed recently.
Nov 23 04:16:31 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 23 04:16:31 localhost systemd[1]: Finished dnf makecache.
Nov 23 04:16:31 localhost systemd[1]: dnf-makecache.service: Consumed 2.226s CPU time.
Nov 23 04:16:31 localhost python3.9[138439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889390.9425366-442-33044028496113/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:33 localhost python3.9[138531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:34 localhost python3.9[138604]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889393.0159345-487-275760844032313/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49380 DF PROTO=TCP SPT=51882 DPT=9100 SEQ=1657782759 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75848AD90000000001030307) 
Nov 23 04:16:35 localhost python3.9[138696]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:16:35 localhost systemd[1]: Reloading.
Nov 23 04:16:35 localhost systemd-rc-local-generator[138721]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:16:35 localhost systemd-sysv-generator[138726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:16:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:16:35 localhost systemd[1]: Starting Create netns directory...
Nov 23 04:16:35 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 04:16:35 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 04:16:35 localhost systemd[1]: Finished Create netns directory.
Nov 23 04:16:36 localhost python3.9[138828]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:16:36 localhost network[138845]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:16:36 localhost network[138846]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:16:36 localhost network[138847]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:16:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5512 DF PROTO=TCP SPT=46444 DPT=9105 SEQ=575651890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758494D90000000001030307) 
Nov 23 04:16:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54893 DF PROTO=TCP SPT=52488 DPT=9102 SEQ=1328153486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758496D90000000001030307) 
Nov 23 04:16:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:16:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40898 DF PROTO=TCP SPT=59746 DPT=9882 SEQ=3060292168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7584A1EC0000000001030307) 
Nov 23 04:16:41 localhost sshd[139031]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:16:41 localhost python3.9[139051]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:42 localhost python3.9[139126]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889401.3909726-611-233180164414559/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2008 DF PROTO=TCP SPT=50660 DPT=9101 SEQ=3005462349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7584AA590000000001030307) 
Nov 23 04:16:44 localhost python3.9[139219]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:16:44 localhost systemd[1]: Reloading OpenSSH server daemon...
Nov 23 04:16:44 localhost systemd[1]: Reloaded OpenSSH server daemon.
Nov 23 04:16:44 localhost sshd[118615]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:16:44 localhost python3.9[139315]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:45 localhost python3.9[139407]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:46 localhost python3.9[139480]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889405.2374249-707-60413240466854/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2009 DF PROTO=TCP SPT=50660 DPT=9101 SEQ=3005462349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7584BA190000000001030307) 
Nov 23 04:16:47 localhost python3.9[139572]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 23 04:16:47 localhost systemd[1]: Starting Time & Date Service...
Nov 23 04:16:47 localhost systemd[1]: Started Time & Date Service.
Nov 23 04:16:48 localhost python3.9[139668]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:50 localhost python3.9[139760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49632 DF PROTO=TCP SPT=49042 DPT=9100 SEQ=3733894054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7584C4870000000001030307) 
Nov 23 04:16:50 localhost python3.9[139833]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889409.594363-808-203009235008333/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:51 localhost python3.9[139925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:51 localhost python3.9[139998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889410.9446354-854-214853063511304/.source.yaml _original_basename=.p3bw4dy9 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:52 localhost python3.9[140090]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:53 localhost python3.9[140165]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889412.111096-900-117020861214410/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49634 DF PROTO=TCP SPT=49042 DPT=9100 SEQ=3733894054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7584D09A0000000001030307) 
Nov 23 04:16:53 localhost python3.9[140257]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:16:54 localhost python3.9[140350]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:16:55 localhost python3[140443]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 04:16:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52487 DF PROTO=TCP SPT=55190 DPT=9105 SEQ=3343729111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7584D8D90000000001030307) 
Nov 23 04:16:55 localhost python3.9[140535]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:56 localhost python3.9[140608]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889415.5106635-1016-182342318200494/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:57 localhost python3.9[140700]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:57 localhost python3.9[140773]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889416.772187-1061-247997066309410/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:58 localhost python3.9[140865]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:16:58 localhost sshd[140906]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:16:58 localhost python3.9[140940]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889417.9367585-1106-195545888495783/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:16:59 localhost python3.9[141032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:17:01 localhost python3.9[141105]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889419.1721802-1151-211359040291887/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:01 localhost python3.9[141197]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:17:02 localhost python3.9[141270]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889421.417578-1196-137902962392943/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:03 localhost python3.9[141362]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:04 localhost sshd[141455]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:17:04 localhost python3.9[141454]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:17:05 localhost python3.9[141551]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:06 localhost python3.9[141644]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:06 localhost python3.9[141736]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:07 localhost python3.9[141828]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 04:17:08 localhost python3.9[141921]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 23 04:17:09 localhost systemd[1]: session-43.scope: Deactivated successfully.
Nov 23 04:17:09 localhost systemd[1]: session-43.scope: Consumed 29.015s CPU time.
Nov 23 04:17:09 localhost systemd-logind[761]: Session 43 logged out. Waiting for processes to exit.
Nov 23 04:17:09 localhost systemd-logind[761]: Removed session 43.
Nov 23 04:17:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47942 DF PROTO=TCP SPT=38730 DPT=9101 SEQ=4232969422 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758513640000000001030307) 
Nov 23 04:17:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3665 DF PROTO=TCP SPT=58880 DPT=9882 SEQ=1018970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585171C0000000001030307) 
Nov 23 04:17:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24425 DF PROTO=TCP SPT=32916 DPT=9101 SEQ=2121487875 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758522D90000000001030307) 
Nov 23 04:17:14 localhost sshd[141937]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:17:15 localhost systemd-logind[761]: New session 44 of user zuul.
Nov 23 04:17:15 localhost systemd[1]: Started Session 44 of User zuul.
Nov 23 04:17:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15752 DF PROTO=TCP SPT=41642 DPT=9882 SEQ=3368423546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758526D90000000001030307) 
Nov 23 04:17:16 localhost python3.9[142032]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 23 04:17:17 localhost python3.9[142124]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:17:17 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 23 04:17:19 localhost python3.9[142220]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 23 04:17:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34981 DF PROTO=TCP SPT=33782 DPT=9100 SEQ=2818816777 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758539B70000000001030307) 
Nov 23 04:17:20 localhost systemd[1]: tmp-crun.WVOB3b.mount: Deactivated successfully.
Nov 23 04:17:20 localhost podman[142336]: 2025-11-23 09:17:20.584722546 +0000 UTC m=+0.095367407 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, release=553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git)
Nov 23 04:17:20 localhost podman[142336]: 2025-11-23 09:17:20.687792887 +0000 UTC m=+0.198437768 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=)
Nov 23 04:17:21 localhost python3.9[142477]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.gzj6qdsf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:17:21 localhost python3.9[142608]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.gzj6qdsf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889440.6773064-191-196015885501958/.source.gzj6qdsf _original_basename=.a2qkf0hm follow=False checksum=86d7095ff15f9038e30789829322247c323137f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15396 DF PROTO=TCP SPT=52124 DPT=9105 SEQ=1250006399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758542130000000001030307) 
Nov 23 04:17:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52302 DF PROTO=TCP SPT=40160 DPT=9102 SEQ=2355702886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758545F00000000001030307) 
Nov 23 04:17:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49382 DF PROTO=TCP SPT=51882 DPT=9100 SEQ=1657782759 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758548D90000000001030307) 
Nov 23 04:17:24 localhost python3.9[142721]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:17:26 localhost python3.9[142813]: ansible-ansible.builtin.blockinfile Invoked with block=np0005532581.localdomain,192.168.122.103,np0005532581* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRibSMIP5+E9lJWuaKDEuCaJoGhGPTqff+o8SP2Twk+NhPOa5FC7WQhHPLXVhKAtlCX60ckYE53Q/H/RVRZ55JdWQLSdY/1tQCD6c0Ry6N+UD+mxo9iN9cHk6vd6J5kJu+v/gBEmFY1A9pjzsD1CTR8gZJHZFqbUTzXrKkoUjK3Kqa8UtvzyhgYQtYIaUwaf1z7CMNQ3A4EaGVKyRsVwb11jlaT9fjB43E3tp9p5EG6PPJEGux/Xea6iHnhSwZHpkD/ylneDOkBbGvYKhL33bpXMcbuHy32jAFr+2Q07sKvgy/b5/f/nTgNCyxEIpoXUbEhX+Vlh+gycU7KJw6FRyR3dQFjooV97NQ/oov2VP9DnTObziZA8lhaJ20ChTfDVUyvFCFi3dKgBUPCeNWCGI69eNHu3dQcwCNJ3kANqhHdkYpBd00PVBritJfxfzH1DCLo0I9CSi1buWYhein9VHZWtzePv/+ucWERRIo+J04QPkV+6P6vgOTRl5U75RctJU=#012np0005532581.localdomain,192.168.122.103,np0005532581* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG7auGqCubvIeT+Z8+DFgAyuqWDpDfRlZtndf8hFQOt7#012np0005532581.localdomain,192.168.122.103,np0005532581* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKGIbd1xFE29cgvdOZ+Uh6ipkdk4QfLnBLiJP+rzeHVtOUTgjR98CvJhrHQdGAxaTty6xRV53oj5EhBdMCJFc5I=#012np0005532584.localdomain,192.168.122.106,np0005532584* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3OrbPXlomvlluk5pGQwXwJu+cR1IMLHg5EnGcI5epB1SB6q/EzlEo5+bOYmmvILsoesUzBIBq21mRhn1Wi2yjlys0pArFDqiLkUBvTW9ro6MKci9Smc12m7AkLus6UO6h3pzqcOdRZQ3KOQDL/83yYJVBCJyqlISXWzzHJpGRVnZHeT4CgKZ1nG5UEvOrtPXRAVWkz3v5TghJrYXvWaPQPmWcEy1rfhCjkCfQY++JB/Dlgammmd1+ZldadeXQi1b2X02a6GFyW0pUMFLjAP7Wr+KcRa5FIPmGwsPuc1NhveAH6zyLrabrh7jPR5O0tBjz9KcNYXbQmJetGt9ZWzFsl0qzXrvI38q5RlGptbqg0iSez61VBAUtnfs33hnYc3dvzJKXReR76PoU3yu/tLrhdK6szqIVsMdw2LGEro7l3KKMKXHSpi8n77fH8ICiU3F5Oif+nvS/e7xr4LccSEnFEHA9PdNxOWxJYLcxTQCt3BkNFrWw4oB1LiDsn98HlS8=#012np0005532584.localdomain,192.168.122.106,np0005532584* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACxkoVt3BLqmT5JuJibOj2srWJ99rHYxhxT/gCbLdIM#012np0005532584.localdomain,192.168.122.106,np0005532584* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJi5N6oeJPjl3EunvvHi6baJIH9ibE30q8MR/UiZkuoStWh4NAj+cNFWO47723JbHkDzCF1p+3RJ1FLROkiZ4W0=#012np0005532583.localdomain,192.168.122.105,np0005532583* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkB1Cq8AQaEBYTlv5Hzs024jg//D6wieNnvsI5WcYj7wckm9vKTJQfUD6yZBMmyPw6+vVzsM16bj2hagkDR5wkO7uSIaMqWrcoQ1h9HkJQLK8QB0iuzUvQzdr22kUgkLII8thNHK4VxF4VhAKNmzqCofZ4ZSaLUMwauFCFUjx1VJISEZdgYRZ4+++wAN5bdK+WrwSOAHJYJWQX2pRRsPiunSdY1BOUKB3sp7IBcQ3MDJgnKlkR7tiGSYB2W8JsLvIsIb0I2EaqmPUTIzKUuxSJnWEls/WyDT9MNkjhobVeAyFZ5TEik4OvobUhVGJ8CsU7O101KQNQ3IywPM+V0UpjA1yK49z5Qs0LjApmqORsTcjOojYaKGr9n64dVjXdFOMwajB9UmMEFtlIngm6kx7mJQGXqYxVAscW34JY832iKOEzQWrUSdo6mVJ7TXhYYcbdFp+G/128SfhNrbHwKinHeE9Nqu48BR7bmRZXO7ef+UMY1dG3AIvFt4JwFvLihZc=#012np0005532583.localdomain,192.168.122.105,np0005532583* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH4H0HJaVZZzbQbH92x/ePbqiic7VLTV0Kle7XvCiMNK#012np0005532583.localdomain,192.168.122.105,np0005532583* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEb0/1S3v0DC07ZQnLEp9URjtv9BKwGlPRsb47Ua8w+WgbOM0JmtKaPebzMcBow+04/+k7+HcCDBj6p5Yd4q3M4=#012np0005532585.localdomain,192.168.122.107,np0005532585* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCU6ocW8HWtJJyWPSFUqcN5z70XYnNrE5KeWh/VJ4bDkpVePpxxcdD8r8cKL121q0MKPRgia3jLqnKz+o4MH3AqTAWCZamBc1+ePq9OvZDenK69byea8TM176uYzfePjNlud4LSZ6lfkgneO5jeNE6/RcHgBc8Me+2mlzpavioA814r6Ci6hFaEIOS1Zd2b/yKzI4QRl6xg/aJKvlIe9w3G3BvKOG5pixPx2ng4wYc0OMtJb9ItJgZLY92GGuvVRwn9e0D4lab84+x/Nn3XatQdqU69ev7da/bQCUeBivyEZo03olh56YxCKvNfG3ZYwwhMTn9Hg/EdnwrGHYHj0ZgfSR1+Dzvnk0WW/MRs0276Ojj5O0hhnlaAh5n97W6fgHldGKvdEafYeD602C1Zkd+ISqF13W56MWhtUhiUsdUHShnpM/EBOITg6mTDFP1i/qMS0PjRaCzBpdqpJIoKzQpsi4Z3QTHTZ7uK/lqOEaE/wqXHuYlMKcTuOuX33gIp28k=#012np0005532585.localdomain,192.168.122.107,np0005532585* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILpJc3/w4q1RFXE8+NzyjCJ0R7ySeHFy75KPVpy/YiB/#012np0005532585.localdomain,192.168.122.107,np0005532585* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLtz4IM2aQZoQ7CuTS4jfYDH5LZPyutyvm+ZyFuW7jdHvK3umSrNYFwsqiHwWHvM9peuWot0GAUC8rCc1UO+ZWk=#012np0005532586.localdomain,192.168.122.108,np0005532586* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6U4JggC29IKqxQ7GjhK23AehQb1S2zLryOxLwLEs9rP0qOZpJ9wR1VsBNLXDCmoRVTsH2+3V00hmkvlanKUuzgmLO61hdur+5NQD0xHnY7lOLpOoyR7hJiMuHj/nRgBLWY2OB8Gim121dgfuc2zRF92igDYe65Uf0et83vWlgRmc7KlziaJ91iVcBUmhGYf3Ij7QxfhQH5TTnGoQizdiBpuP+yVuU2AepbvQ8ZFvzioCwzWAVu/xfdRFp9QyLT4JP1jM6dadTjD5RUAjRL6qR1tLXVq/rvqtXSL8ruBSYm3NCOys9RtdrNolZ7frd+zmvF+VzMNLtlRxiuy1ReR+ZO3felB+4TwfEfLZ+DqE1s3+ksCQH/sVCrxzFsRz5lamWG3p78ZBWTiQ/7WdJS1dQOHz+pKNSSW/NYMIqitxsCsEWPJLq/EWoHVxvjREucCb5YvWHPKOv5RLlbm5lSHFLuFVV8O3AAzD/3JsjTbKGOjJhmtxPCgEy7RPqtIUX90s=#012np0005532586.localdomain,192.168.122.108,np0005532586* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKzaUMbW2RXGluOr1nHypPwK+dIm5zaIFHsNA8PEtRqK#012np0005532586.localdomain,192.168.122.108,np0005532586* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLLaE/jo8XH2dLl/mTc9NRhBP3x+ig/gy7tepiJNCqlj0Dgb5vfu6IYaFNrkyisiqhenCsUZQo/guhdX9Nisv9I=#012np0005532582.localdomain,192.168.122.104,np0005532582* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0v47OVdr7YS/5xSUmMc7u26O7OwPomkdDR6s8rrcencbx7seRSeU00QGeRQcJJ023bD3xk26W8iiJTRUDkYSy//cSfHODdDy+CNEfDUTkGzIjiApoLi2b+S4J6wcAldMsj02MZmx67vUHyM5Qwok+22XqopryL8BiGPJbnoUcZy773f5OKPPMNuj3Fyb7jd5mrC7awK4NniZHyHPYBQeBa234HL42fRjcOqCcxuauy5cbz9PeBv5/kg+nYc8cY5qCyLqNhzMVRUa/PcepMBcfThk17LtPGzCYS7IR2cGdUDP6Pe0QD34Hu6+mpwKwYx73v5uHcmy9CeZ8fK83/F84Lr6jxsiwoU2e+hUfzVRq8gnkjk6kuL86eSM2POSGgBYYgCb+Ma6lOkF1MA+rLAh0gAsUhBgVlz6HtaMoDvLOi/NrQeoQyNE1Pv4vPAndmGGc8A7JCtmCMk9VvMy0Ht4IOvtDJFfx1lg7NuMIKqePYTEk56p8wTUNM+BmdJEhFPU=#012np0005532582.localdomain,192.168.122.104,np0005532582* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEaLDeiqlvIGmYCK/pVle4dWQoWUl9JopG1HgV4OQwpm#012np0005532582.localdomain,192.168.122.104,np0005532582* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPG4t0LXPuGTxEFWkant9P4DDIM9mUsBdh3iJHN1QOZUHW9RJuWVAPGkYlb6jz2BktGBRNU2FJD+HyIE3L+OanQ=#012 create=True mode=0644 path=/tmp/ansible.gzj6qdsf state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:27 localhost python3.9[142905]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gzj6qdsf' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:17:28 localhost python3.9[142999]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gzj6qdsf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:29 localhost systemd[1]: session-44.scope: Deactivated successfully.
Nov 23 04:17:29 localhost systemd[1]: session-44.scope: Consumed 4.330s CPU time.
Nov 23 04:17:29 localhost systemd-logind[761]: Session 44 logged out. Waiting for processes to exit.
Nov 23 04:17:29 localhost systemd-logind[761]: Removed session 44.
Nov 23 04:17:36 localhost sshd[143014]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:17:36 localhost systemd-logind[761]: New session 45 of user zuul.
Nov 23 04:17:36 localhost systemd[1]: Started Session 45 of User zuul.
Nov 23 04:17:37 localhost python3.9[143107]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:17:38 localhost python3.9[143203]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 04:17:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55732 DF PROTO=TCP SPT=45298 DPT=9101 SEQ=2495282686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758588950000000001030307) 
Nov 23 04:17:40 localhost python3.9[143297]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:17:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32851 DF PROTO=TCP SPT=49080 DPT=9882 SEQ=3563360548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75858C4D0000000001030307) 
Nov 23 04:17:41 localhost python3.9[143390]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:17:42 localhost python3.9[143483]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:17:42 localhost python3.9[143577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:17:43 localhost python3.9[143672]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:17:44 localhost systemd[1]: session-45.scope: Deactivated successfully.
Nov 23 04:17:44 localhost systemd[1]: session-45.scope: Consumed 3.921s CPU time.
Nov 23 04:17:44 localhost systemd-logind[761]: Session 45 logged out. Waiting for processes to exit.
Nov 23 04:17:44 localhost systemd-logind[761]: Removed session 45.
Nov 23 04:17:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51861 DF PROTO=TCP SPT=59946 DPT=9100 SEQ=3201619457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585AEE60000000001030307) 
Nov 23 04:17:50 localhost sshd[143687]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:17:50 localhost systemd-logind[761]: New session 46 of user zuul.
Nov 23 04:17:50 localhost systemd[1]: Started Session 46 of User zuul.
Nov 23 04:17:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51862 DF PROTO=TCP SPT=59946 DPT=9100 SEQ=3201619457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585B2D90000000001030307) 
Nov 23 04:17:51 localhost python3.9[143780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:17:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14473 DF PROTO=TCP SPT=42520 DPT=9105 SEQ=794576838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585B7440000000001030307) 
Nov 23 04:17:52 localhost python3.9[143876]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51863 DF PROTO=TCP SPT=59946 DPT=9100 SEQ=3201619457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585BAD90000000001030307) 
Nov 23 04:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36971 DF PROTO=TCP SPT=48434 DPT=9102 SEQ=219501030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585BB200000000001030307) 
Nov 23 04:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14474 DF PROTO=TCP SPT=42520 DPT=9105 SEQ=794576838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585BB590000000001030307) 
Nov 23 04:17:53 localhost python3.9[143930]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 23 04:17:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36972 DF PROTO=TCP SPT=48434 DPT=9102 SEQ=219501030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585BF190000000001030307) 
Nov 23 04:17:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14475 DF PROTO=TCP SPT=42520 DPT=9105 SEQ=794576838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585C3590000000001030307) 
Nov 23 04:17:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36973 DF PROTO=TCP SPT=48434 DPT=9102 SEQ=219501030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585C7190000000001030307) 
Nov 23 04:17:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51864 DF PROTO=TCP SPT=59946 DPT=9100 SEQ=3201619457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585CA990000000001030307) 
Nov 23 04:17:58 localhost python3.9[144022]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:17:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14476 DF PROTO=TCP SPT=42520 DPT=9105 SEQ=794576838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585D3190000000001030307) 
Nov 23 04:18:00 localhost python3.9[144115]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:01 localhost python3.9[144207]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:02 localhost python3.9[144299]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012  * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:02 localhost python3.9[144389]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:18:03 localhost python3.9[144479]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:18:04 localhost python3.9[144571]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:18:04 localhost systemd[1]: session-46.scope: Deactivated successfully.
Nov 23 04:18:04 localhost systemd[1]: session-46.scope: Consumed 9.603s CPU time.
Nov 23 04:18:04 localhost systemd-logind[761]: Session 46 logged out. Waiting for processes to exit.
Nov 23 04:18:04 localhost systemd-logind[761]: Removed session 46.
Nov 23 04:18:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51865 DF PROTO=TCP SPT=59946 DPT=9100 SEQ=3201619457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585EADA0000000001030307) 
Nov 23 04:18:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14477 DF PROTO=TCP SPT=42520 DPT=9105 SEQ=794576838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585F2D90000000001030307) 
Nov 23 04:18:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36975 DF PROTO=TCP SPT=48434 DPT=9102 SEQ=219501030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7585F6D90000000001030307) 
Nov 23 04:18:10 localhost sshd[144586]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:18:10 localhost systemd-logind[761]: New session 47 of user zuul.
Nov 23 04:18:10 localhost systemd[1]: Started Session 47 of User zuul.
Nov 23 04:18:10 localhost sshd[144622]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:18:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51819 DF PROTO=TCP SPT=40428 DPT=9882 SEQ=382687633 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586017C0000000001030307) 
Nov 23 04:18:12 localhost python3.9[144681]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:18:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6458 DF PROTO=TCP SPT=60772 DPT=9101 SEQ=1320479941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758609D90000000001030307) 
Nov 23 04:18:14 localhost python3.9[144777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:15 localhost python3.9[144869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:16 localhost chronyd[136658]: Selected source 23.133.168.245 (pool.ntp.org)
Nov 23 04:18:16 localhost python3.9[144942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889495.063676-183-217101813693078/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:17 localhost python3.9[145034]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6459 DF PROTO=TCP SPT=60772 DPT=9101 SEQ=1320479941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758619990000000001030307) 
Nov 23 04:18:17 localhost python3.9[145126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:18 localhost python3.9[145199]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889497.2556744-262-252951971662879/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:18 localhost python3.9[145291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:19 localhost python3.9[145383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37565 DF PROTO=TCP SPT=34728 DPT=9100 SEQ=5492243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758624170000000001030307) 
Nov 23 04:18:20 localhost python3.9[145457]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889499.142289-338-20336771205759/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:20 localhost sshd[145550]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:18:20 localhost python3.9[145549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:21 localhost python3.9[145643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:22 localhost python3.9[145716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889501.0676205-412-22165167793516/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:22 localhost python3.9[145838]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37567 DF PROTO=TCP SPT=34728 DPT=9100 SEQ=5492243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758630190000000001030307) 
Nov 23 04:18:23 localhost python3.9[145962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:24 localhost python3.9[146036]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889502.9912114-484-215599688311909/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:24 localhost python3.9[146142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50093 DF PROTO=TCP SPT=40772 DPT=9105 SEQ=1237308446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758638990000000001030307) 
Nov 23 04:18:25 localhost python3.9[146234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:26 localhost python3.9[146307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889505.0048761-561-203671220360243/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:26 localhost python3.9[146399]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:27 localhost python3.9[146491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:28 localhost python3.9[146564]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889507.0481057-639-147110874330055/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:28 localhost python3.9[146656]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50094 DF PROTO=TCP SPT=40772 DPT=9105 SEQ=1237308446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758648590000000001030307) 
Nov 23 04:18:29 localhost python3.9[146748]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:29 localhost sshd[146749]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:18:30 localhost python3.9[146823]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889509.0015943-714-75158821752310/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=85f08144eae371e6c6843864c2d75f3a0dbb50ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:30 localhost systemd[1]: session-47.scope: Deactivated successfully.
Nov 23 04:18:30 localhost systemd[1]: session-47.scope: Consumed 12.526s CPU time.
Nov 23 04:18:30 localhost systemd-logind[761]: Session 47 logged out. Waiting for processes to exit.
Nov 23 04:18:30 localhost systemd-logind[761]: Removed session 47.
Nov 23 04:18:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37569 DF PROTO=TCP SPT=34728 DPT=9100 SEQ=5492243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758660D90000000001030307) 
Nov 23 04:18:36 localhost sshd[146839]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:18:36 localhost systemd-logind[761]: New session 48 of user zuul.
Nov 23 04:18:36 localhost systemd[1]: Started Session 48 of User zuul.
Nov 23 04:18:37 localhost python3.9[146934]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50095 DF PROTO=TCP SPT=40772 DPT=9105 SEQ=1237308446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758668D90000000001030307) 
Nov 23 04:18:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64174 DF PROTO=TCP SPT=50938 DPT=9102 SEQ=1391549412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75866CDA0000000001030307) 
Nov 23 04:18:38 localhost python3.9[147026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:39 localhost python3.9[147099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889518.2286885-64-264459787366458/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5f137984986c8cf5df5aec7749430e0dc129d0db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:40 localhost python3.9[147191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:18:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19285 DF PROTO=TCP SPT=40822 DPT=9882 SEQ=3692094941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758676AC0000000001030307) 
Nov 23 04:18:41 localhost python3.9[147264]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889519.6573124-64-258961165386956/.source.conf _original_basename=ceph.conf follow=False checksum=d6d906a745260c838693e085b1f329bd1daad564 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:18:41 localhost systemd-logind[761]: Session 48 logged out. Waiting for processes to exit.
Nov 23 04:18:41 localhost systemd[1]: session-48.scope: Deactivated successfully.
Nov 23 04:18:41 localhost systemd[1]: session-48.scope: Consumed 2.452s CPU time.
Nov 23 04:18:41 localhost systemd-logind[761]: Removed session 48.
Nov 23 04:18:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12138 DF PROTO=TCP SPT=50708 DPT=9101 SEQ=1237402201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75867F190000000001030307) 
Nov 23 04:18:47 localhost sshd[147279]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:18:47 localhost systemd-logind[761]: New session 49 of user zuul.
Nov 23 04:18:47 localhost systemd[1]: Started Session 49 of User zuul.
Nov 23 04:18:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12139 DF PROTO=TCP SPT=50708 DPT=9101 SEQ=1237402201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75868EDA0000000001030307) 
Nov 23 04:18:48 localhost python3.9[147372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:18:49 localhost python3.9[147468]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:50 localhost python3.9[147560]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:18:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49897 DF PROTO=TCP SPT=44662 DPT=9100 SEQ=2114565468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758699460000000001030307) 
Nov 23 04:18:51 localhost python3.9[147650]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:18:52 localhost python3.9[147742]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 23 04:18:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49899 DF PROTO=TCP SPT=44662 DPT=9100 SEQ=2114565468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586A5590000000001030307) 
Nov 23 04:18:53 localhost python3.9[147834]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:18:54 localhost python3.9[147888]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:18:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=593 DF PROTO=TCP SPT=35872 DPT=9105 SEQ=15169503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586AD990000000001030307) 
Nov 23 04:18:59 localhost python3.9[147982]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:18:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=594 DF PROTO=TCP SPT=35872 DPT=9105 SEQ=15169503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586BD590000000001030307) 
Nov 23 04:19:01 localhost python3[148077]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 23 04:19:02 localhost python3.9[148169]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:02 localhost python3.9[148261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:03 localhost python3.9[148309]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:04 localhost python3.9[148401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:04 localhost python3.9[148449]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wbauj3sm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49901 DF PROTO=TCP SPT=44662 DPT=9100 SEQ=2114565468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586D4D90000000001030307) 
Nov 23 04:19:06 localhost python3.9[148541]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:06 localhost python3.9[148589]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:07 localhost python3.9[148681]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:19:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=595 DF PROTO=TCP SPT=35872 DPT=9105 SEQ=15169503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586DCD90000000001030307) 
Nov 23 04:19:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2716 DF PROTO=TCP SPT=48048 DPT=9102 SEQ=3400940773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586E0D90000000001030307) 
Nov 23 04:19:09 localhost python3[148774]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 04:19:09 localhost python3.9[148866]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:10 localhost python3.9[148941]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889549.3170352-434-79704619125886/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:11 localhost python3.9[149033]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18501 DF PROTO=TCP SPT=53370 DPT=9882 SEQ=1304224491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586EBDC0000000001030307) 
Nov 23 04:19:11 localhost python3.9[149108]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889550.6954494-478-5263465099746/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:12 localhost python3.9[149200]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57424 DF PROTO=TCP SPT=42114 DPT=9101 SEQ=4046811248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7586F4190000000001030307) 
Nov 23 04:19:13 localhost python3.9[149275]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889552.3866923-523-255647614346935/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:14 localhost python3.9[149367]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:14 localhost python3.9[149442]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889553.5957341-568-73041027889754/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:15 localhost python3.9[149534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57425 DF PROTO=TCP SPT=42114 DPT=9101 SEQ=4046811248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758703D90000000001030307) 
Nov 23 04:19:18 localhost python3.9[149610]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889554.8931632-613-65214799588125/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:19 localhost python3.9[149702]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2021 DF PROTO=TCP SPT=42220 DPT=9100 SEQ=3262603207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75870E760000000001030307) 
Nov 23 04:19:20 localhost python3.9[149794]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:19:21 localhost python3.9[149889]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:22 localhost python3.9[149981]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:19:23 localhost python3.9[150074]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:19:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2023 DF PROTO=TCP SPT=42220 DPT=9100 SEQ=3262603207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75871A990000000001030307) 
Nov 23 04:19:23 localhost python3.9[150168]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:19:24 localhost python3.9[150293]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3567 DF PROTO=TCP SPT=39714 DPT=9105 SEQ=1818934581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758722D90000000001030307) 
Nov 23 04:19:25 localhost python3.9[150414]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:19:27 localhost python3.9[150522]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005532586.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:1d:b8:fa:41" external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:19:27 localhost ovs-vsctl[150523]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005532586.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:1d:b8:fa:41 external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 23 04:19:27 localhost sshd[150562]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:19:27 localhost python3.9[150617]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:19:28 localhost python3.9[150710]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:19:29 localhost python3.9[150804]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:19:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3568 DF PROTO=TCP SPT=39714 DPT=9105 SEQ=1818934581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758732990000000001030307) 
Nov 23 04:19:30 localhost python3.9[150896]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:30 localhost python3.9[150944]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:19:31 localhost python3.9[151036]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:31 localhost python3.9[151084]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:19:32 localhost python3.9[151176]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:33 localhost python3.9[151268]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:33 localhost python3.9[151316]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:34 localhost python3.9[151408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2025 DF PROTO=TCP SPT=42220 DPT=9100 SEQ=3262603207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75874AD90000000001030307) 
Nov 23 04:19:35 localhost python3.9[151456]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:36 localhost python3.9[151548]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:19:36 localhost systemd[1]: Reloading.
Nov 23 04:19:36 localhost systemd-rc-local-generator[151572]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:19:36 localhost systemd-sysv-generator[151578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:19:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:19:37 localhost python3.9[151678]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3569 DF PROTO=TCP SPT=39714 DPT=9105 SEQ=1818934581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758752D90000000001030307) 
Nov 23 04:19:37 localhost python3.9[151726]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:38 localhost python3.9[151818]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20260 DF PROTO=TCP SPT=40354 DPT=9102 SEQ=2326647127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758756DA0000000001030307) 
Nov 23 04:19:39 localhost python3.9[151866]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:39 localhost python3.9[151958]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:19:39 localhost systemd[1]: Reloading.
Nov 23 04:19:39 localhost systemd-sysv-generator[151985]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:19:39 localhost systemd-rc-local-generator[151981]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:19:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:19:40 localhost systemd[1]: Starting Create netns directory...
Nov 23 04:19:40 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 04:19:40 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 04:19:40 localhost systemd[1]: Finished Create netns directory.
Nov 23 04:19:40 localhost sshd[152016]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:19:41 localhost python3.9[152095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:19:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38380 DF PROTO=TCP SPT=32782 DPT=9882 SEQ=962689768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587610C0000000001030307) 
Nov 23 04:19:41 localhost python3.9[152187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:42 localhost python3.9[152260]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889581.253663-1346-47290495888989/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:19:43 localhost python3.9[152352]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:19:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52914 DF PROTO=TCP SPT=39842 DPT=9101 SEQ=370034401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587695A0000000001030307) 
Nov 23 04:19:44 localhost python3.9[152444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:19:44 localhost python3.9[152519]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889583.5516968-1420-63378855007731/.source.json _original_basename=.e1uqvq8f follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:45 localhost python3.9[152611]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52915 DF PROTO=TCP SPT=39842 DPT=9101 SEQ=370034401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587791A0000000001030307) 
Nov 23 04:19:48 localhost python3.9[152868]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 23 04:19:49 localhost python3.9[152960]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:19:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40781 DF PROTO=TCP SPT=43868 DPT=9100 SEQ=3101395272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758783A60000000001030307) 
Nov 23 04:19:50 localhost python3.9[153052]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 04:19:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40783 DF PROTO=TCP SPT=43868 DPT=9100 SEQ=3101395272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75878F990000000001030307) 
Nov 23 04:19:54 localhost sshd[153173]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:19:54 localhost python3[153172]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:19:55 localhost python3[153172]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c",#012          "Digest": "sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-11-21T06:40:43.504967825Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251118",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 345731014,#012          "VirtualSize": 345731014,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012                    "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012                    "sha256:2e0f9ca9a8387a3566096aacaecfe5797e3fc2585f07cb97a1706897fa1a86a3",#012                    "sha256:db37b2d335b44e6a9cb2eb88713051bc469233d1e0a06670f1303bc9539b97a0"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251118",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-11-18T01:56:49.795434035Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:49.795512415Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:52.547242013Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947310748Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947327778Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947358359Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012             
  {#012                    "created": "2025-11-21T06:10:01.947372589Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94738527Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94739397Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:02.324930938Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:36.349393468Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012                    
"created": "2025-11-21T06:10:39.924297673Z",#012                    "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-li
Nov 23 04:19:55 localhost podman[153226]: 2025-11-23 09:19:55.280211848 +0000 UTC m=+0.080398569 container remove 838c9246e128e3262727676938c60d529d5121aaeb4a3400eda98359d695f4fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=)
Nov 23 04:19:55 localhost python3[153172]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller
Nov 23 04:19:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9771 DF PROTO=TCP SPT=35446 DPT=9105 SEQ=2118953672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587981A0000000001030307) 
Nov 23 04:19:55 localhost podman[153240]: 
Nov 23 04:19:55 localhost podman[153240]: 2025-11-23 09:19:55.391229875 +0000 UTC m=+0.090729549 container create eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:19:55 localhost podman[153240]: 2025-11-23 09:19:55.349602759 +0000 UTC m=+0.049102423 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 23 04:19:55 localhost python3[153172]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 23 04:19:56 localhost python3.9[153370]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:19:59 localhost python3.9[153464]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:19:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9772 DF PROTO=TCP SPT=35446 DPT=9105 SEQ=2118953672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587A7D90000000001030307) 
Nov 23 04:19:59 localhost python3.9[153510]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:20:00 localhost python3.9[153601]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763889599.7547638-1684-230701442976044/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:20:01 localhost python3.9[153647]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:20:01 localhost systemd[1]: Reloading.
Nov 23 04:20:01 localhost systemd-sysv-generator[153673]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:20:01 localhost systemd-rc-local-generator[153668]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:20:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:20:02 localhost python3.9[153729]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:20:02 localhost systemd[1]: Reloading.
Nov 23 04:20:02 localhost systemd-rc-local-generator[153759]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:20:02 localhost systemd-sysv-generator[153763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:20:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:20:02 localhost systemd[1]: Starting ovn_controller container...
Nov 23 04:20:02 localhost systemd[1]: Started libcrun container.
Nov 23 04:20:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f197b524e03e7de2286249b87158b862ce5f4300ea0a28a701d61eb9ee3feefe/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 23 04:20:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:20:02 localhost podman[153771]: 2025-11-23 09:20:02.655391388 +0000 UTC m=+0.137310580 container init eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 04:20:02 localhost ovn_controller[153786]: + sudo -E kolla_set_configs
Nov 23 04:20:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:20:02 localhost podman[153771]: 2025-11-23 09:20:02.693363618 +0000 UTC m=+0.175282780 container start eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:20:02 localhost edpm-start-podman-container[153771]: ovn_controller
Nov 23 04:20:02 localhost systemd[1]: Created slice User Slice of UID 0.
Nov 23 04:20:02 localhost systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 23 04:20:02 localhost systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 23 04:20:02 localhost systemd[1]: Starting User Manager for UID 0...
Nov 23 04:20:02 localhost podman[153793]: 2025-11-23 09:20:02.791102929 +0000 UTC m=+0.090584355 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:20:02 localhost edpm-start-podman-container[153770]: Creating additional drop-in dependency for "ovn_controller" (eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c)
Nov 23 04:20:02 localhost systemd[1]: Reloading.
Nov 23 04:20:02 localhost podman[153793]: 2025-11-23 09:20:02.875850355 +0000 UTC m=+0.175331791 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:20:02 localhost podman[153793]: unhealthy
Nov 23 04:20:02 localhost systemd[153820]: Queued start job for default target Main User Target.
Nov 23 04:20:02 localhost systemd[153820]: Created slice User Application Slice.
Nov 23 04:20:02 localhost systemd[153820]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 23 04:20:02 localhost systemd[153820]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 04:20:02 localhost systemd[153820]: Reached target Paths.
Nov 23 04:20:02 localhost systemd[153820]: Reached target Timers.
Nov 23 04:20:02 localhost systemd[153820]: Starting D-Bus User Message Bus Socket...
Nov 23 04:20:02 localhost systemd[153820]: Starting Create User's Volatile Files and Directories...
Nov 23 04:20:02 localhost systemd[153820]: Finished Create User's Volatile Files and Directories.
Nov 23 04:20:02 localhost systemd[153820]: Listening on D-Bus User Message Bus Socket.
Nov 23 04:20:02 localhost systemd[153820]: Reached target Sockets.
Nov 23 04:20:02 localhost systemd[153820]: Reached target Basic System.
Nov 23 04:20:02 localhost systemd[153820]: Reached target Main User Target.
Nov 23 04:20:02 localhost systemd[153820]: Startup finished in 127ms.
Nov 23 04:20:02 localhost systemd-sysv-generator[153871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:20:02 localhost systemd-rc-local-generator[153868]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:20:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:20:03 localhost systemd[1]: tmp-crun.2dgBvj.mount: Deactivated successfully.
Nov 23 04:20:03 localhost systemd[1]: Started User Manager for UID 0.
Nov 23 04:20:03 localhost systemd[1]: Started ovn_controller container.
Nov 23 04:20:03 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:20:03 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Failed with result 'exit-code'.
Nov 23 04:20:03 localhost systemd[1]: Started Session c11 of User root.
Nov 23 04:20:03 localhost ovn_controller[153786]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:20:03 localhost ovn_controller[153786]: INFO:__main__:Validating config file
Nov 23 04:20:03 localhost ovn_controller[153786]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:20:03 localhost ovn_controller[153786]: INFO:__main__:Writing out command to execute
Nov 23 04:20:03 localhost systemd[1]: session-c11.scope: Deactivated successfully.
Nov 23 04:20:03 localhost ovn_controller[153786]: ++ cat /run_command
Nov 23 04:20:03 localhost ovn_controller[153786]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Nov 23 04:20:03 localhost ovn_controller[153786]: + ARGS=
Nov 23 04:20:03 localhost ovn_controller[153786]: + sudo kolla_copy_cacerts
Nov 23 04:20:03 localhost systemd[1]: Started Session c12 of User root.
Nov 23 04:20:03 localhost systemd[1]: session-c12.scope: Deactivated successfully.
Nov 23 04:20:03 localhost ovn_controller[153786]: + [[ ! -n '' ]]
Nov 23 04:20:03 localhost ovn_controller[153786]: + . kolla_extend_start
Nov 23 04:20:03 localhost ovn_controller[153786]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Nov 23 04:20:03 localhost ovn_controller[153786]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\'''
Nov 23 04:20:03 localhost ovn_controller[153786]: + umask 0022
Nov 23 04:20:03 localhost ovn_controller[153786]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00013|main|INFO|OVS feature set changed, force recompute.
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00021|main|INFO|OVS feature set changed, force recompute.
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 04:20:03 localhost ovn_controller[153786]: 2025-11-23T09:20:03Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 23 04:20:04 localhost python3.9[153984]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:20:04 localhost ovs-vsctl[153985]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 23 04:20:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40785 DF PROTO=TCP SPT=43868 DPT=9100 SEQ=3101395272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587BED90000000001030307) 
Nov 23 04:20:05 localhost python3.9[154077]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:20:05 localhost ovs-vsctl[154079]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 23 04:20:06 localhost python3.9[154172]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:20:06 localhost ovs-vsctl[154173]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 23 04:20:07 localhost systemd[1]: session-49.scope: Deactivated successfully.
Nov 23 04:20:07 localhost systemd[1]: session-49.scope: Consumed 42.356s CPU time.
Nov 23 04:20:07 localhost systemd-logind[761]: Session 49 logged out. Waiting for processes to exit.
Nov 23 04:20:07 localhost systemd-logind[761]: Removed session 49.
Nov 23 04:20:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9773 DF PROTO=TCP SPT=35446 DPT=9105 SEQ=2118953672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587C8D90000000001030307) 
Nov 23 04:20:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2232 DF PROTO=TCP SPT=34230 DPT=9102 SEQ=619317374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587CCD90000000001030307) 
Nov 23 04:20:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50917 DF PROTO=TCP SPT=38648 DPT=9882 SEQ=1909779351 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587D63C0000000001030307) 
Nov 23 04:20:13 localhost sshd[154188]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:20:13 localhost systemd-logind[761]: New session 51 of user zuul.
Nov 23 04:20:13 localhost systemd[1]: Started Session 51 of User zuul.
Nov 23 04:20:13 localhost systemd[1]: Stopping User Manager for UID 0...
Nov 23 04:20:13 localhost systemd[153820]: Activating special unit Exit the Session...
Nov 23 04:20:13 localhost systemd[153820]: Stopped target Main User Target.
Nov 23 04:20:13 localhost systemd[153820]: Stopped target Basic System.
Nov 23 04:20:13 localhost systemd[153820]: Stopped target Paths.
Nov 23 04:20:13 localhost systemd[153820]: Stopped target Sockets.
Nov 23 04:20:13 localhost systemd[153820]: Stopped target Timers.
Nov 23 04:20:13 localhost systemd[153820]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 04:20:13 localhost systemd[153820]: Closed D-Bus User Message Bus Socket.
Nov 23 04:20:13 localhost systemd[153820]: Stopped Create User's Volatile Files and Directories.
Nov 23 04:20:13 localhost systemd[153820]: Removed slice User Application Slice.
Nov 23 04:20:13 localhost systemd[153820]: Reached target Shutdown.
Nov 23 04:20:13 localhost systemd[153820]: Finished Exit the Session.
Nov 23 04:20:13 localhost systemd[153820]: Reached target Exit the Session.
Nov 23 04:20:13 localhost systemd[1]: user@0.service: Deactivated successfully.
Nov 23 04:20:13 localhost systemd[1]: Stopped User Manager for UID 0.
Nov 23 04:20:13 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 23 04:20:13 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 23 04:20:13 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 23 04:20:13 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 23 04:20:13 localhost systemd[1]: Removed slice User Slice of UID 0.
Nov 23 04:20:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33435 DF PROTO=TCP SPT=52904 DPT=9101 SEQ=161980028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587DE990000000001030307) 
Nov 23 04:20:14 localhost python3.9[154283]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:20:15 localhost python3.9[154379]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:16 localhost python3.9[154471]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:16 localhost python3.9[154563]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33436 DF PROTO=TCP SPT=52904 DPT=9101 SEQ=161980028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587EE590000000001030307) 
Nov 23 04:20:17 localhost python3.9[154655]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:18 localhost python3.9[154747]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:18 localhost python3.9[154837]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:20:19 localhost python3.9[154929]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 23 04:20:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24952 DF PROTO=TCP SPT=53188 DPT=9100 SEQ=2130164648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7587F8D70000000001030307) 
Nov 23 04:20:20 localhost python3.9[155019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:21 localhost python3.9[155092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889620.015087-220-216773623687567/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:21 localhost python3.9[155183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:22 localhost python3.9[155256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889621.446901-266-67988798711773/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24954 DF PROTO=TCP SPT=53188 DPT=9100 SEQ=2130164648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758804D90000000001030307) 
Nov 23 04:20:23 localhost python3.9[155348]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:20:24 localhost python3.9[155402]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:20:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23635 DF PROTO=TCP SPT=46768 DPT=9105 SEQ=2465583193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75880D590000000001030307) 
Nov 23 04:20:29 localhost python3.9[155571]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:20:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23636 DF PROTO=TCP SPT=46768 DPT=9105 SEQ=2465583193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75881D190000000001030307) 
Nov 23 04:20:29 localhost python3.9[155664]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:30 localhost python3.9[155735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889629.4435635-376-118802123802264/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:31 localhost python3.9[155825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:31 localhost python3.9[155896]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889630.6292946-376-175153584593815/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:33 localhost python3.9[155986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:33 localhost python3.9[156057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889632.6806228-508-13982865896782/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:20:34 localhost systemd[1]: tmp-crun.PYAq2F.mount: Deactivated successfully.
Nov 23 04:20:34 localhost podman[156148]: 2025-11-23 09:20:34.191061626 +0000 UTC m=+0.096486980 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 23 04:20:34 localhost ovn_controller[153786]: 2025-11-23T09:20:34Z|00023|memory|INFO|13008 kB peak resident set size after 30.9 seconds
Nov 23 04:20:34 localhost ovn_controller[153786]: 2025-11-23T09:20:34Z|00024|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3
Nov 23 04:20:34 localhost python3.9[156147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:34 localhost podman[156148]: 2025-11-23 09:20:34.260992785 +0000 UTC m=+0.166418169 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:20:34 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:20:34 localhost python3.9[156243]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889633.7751367-508-227793242515230/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:35 localhost python3.9[156333]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:20:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24956 DF PROTO=TCP SPT=53188 DPT=9100 SEQ=2130164648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758834D90000000001030307) 
Nov 23 04:20:36 localhost python3.9[156427]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:36 localhost python3.9[156519]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:37 localhost python3.9[156567]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23637 DF PROTO=TCP SPT=46768 DPT=9105 SEQ=2465583193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75883CD90000000001030307) 
Nov 23 04:20:37 localhost python3.9[156659]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46464 DF PROTO=TCP SPT=42484 DPT=9102 SEQ=3104582784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758840DA0000000001030307) 
Nov 23 04:20:38 localhost python3.9[156707]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:39 localhost python3.9[156799]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:20:39 localhost python3.9[156891]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:40 localhost python3.9[156939]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:20:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42736 DF PROTO=TCP SPT=47810 DPT=9882 SEQ=1817503210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75884B6C0000000001030307) 
Nov 23 04:20:41 localhost python3.9[157031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:41 localhost python3.9[157079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:20:42 localhost python3.9[157171]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:20:42 localhost systemd[1]: Reloading.
Nov 23 04:20:42 localhost systemd-rc-local-generator[157196]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:20:42 localhost systemd-sysv-generator[157200]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:20:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:20:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28824 DF PROTO=TCP SPT=55522 DPT=9101 SEQ=2782490582 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758853D90000000001030307) 
Nov 23 04:20:43 localhost python3.9[157301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:44 localhost python3.9[157349]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:20:44 localhost python3.9[157441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:46 localhost python3.9[157489]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:20:47 localhost python3.9[157581]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:20:47 localhost systemd[1]: Reloading.
Nov 23 04:20:47 localhost systemd-rc-local-generator[157605]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:20:47 localhost systemd-sysv-generator[157609]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:20:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:20:47 localhost systemd[1]: Starting Create netns directory...
Nov 23 04:20:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28825 DF PROTO=TCP SPT=55522 DPT=9101 SEQ=2782490582 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758863990000000001030307) 
Nov 23 04:20:47 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 04:20:47 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 04:20:47 localhost systemd[1]: Finished Create netns directory.
Nov 23 04:20:48 localhost python3.9[157715]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:49 localhost python3.9[157807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:49 localhost python3.9[157880]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763889648.5751207-962-47631030686703/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16319 DF PROTO=TCP SPT=53446 DPT=9100 SEQ=3028908859 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75886E070000000001030307) 
Nov 23 04:20:50 localhost python3.9[157972]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:20:51 localhost python3.9[158064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:20:52 localhost python3.9[158139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889650.766936-1036-178156295059878/.source.json _original_basename=.78mb04k_ follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:20:52 localhost python3.9[158231]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:20:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16321 DF PROTO=TCP SPT=53446 DPT=9100 SEQ=3028908859 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75887A1A0000000001030307) 
Nov 23 04:20:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33190 DF PROTO=TCP SPT=36312 DPT=9105 SEQ=3748246070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758882590000000001030307) 
Nov 23 04:20:55 localhost python3.9[158488]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 23 04:20:56 localhost python3.9[158580]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:20:57 localhost python3.9[158672]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 04:20:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33191 DF PROTO=TCP SPT=36312 DPT=9105 SEQ=3748246070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758892190000000001030307) 
Nov 23 04:21:01 localhost python3[158791]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:21:01 localhost python3[158791]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9",#012          "Digest": "sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-11-21T06:31:40.431364621Z",#012          "Config": {#012               "User": "neutron",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251118",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": 
"SIGTERM"#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 784198911,#012          "VirtualSize": 784198911,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc/diff:/var/lib/containers/storage/overlay/cb3e7a7b413bc69102c7d8435b32176125a43d794f5f87a0ef3f45710221e344/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012                    "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012                    "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012                    "sha256:03228f16e908b0892695bcc077f4378f9669ff86bd51a3747df5ce9269c56477",#012                    "sha256:1bc9c5b4c351caaeaa6b900805b43669e78b079f06d9048393517dd05690b8dc",#012                    "sha256:83d6638c009d9ced6da21e0f659e23221a9a8d7c283582e370f21a7551100a49"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012     
          "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251118",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "neutron",#012          "History": [#012               {#012                    "created": "2025-11-18T01:56:49.795434035Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:49.795512415Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:52.547242013Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947310748Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": 
"2025-11-21T06:10:01.947327778Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947358359Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947372589Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94738527Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94739397Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:02.324930938Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:36.349393468Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set 
/etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf 
Nov 23 04:21:01 localhost podman[158838]: 2025-11-23 09:21:01.902607766 +0000 UTC m=+0.097360984 container remove 21e88f8333adad85d467fe94986ea0a0841df7eb9fcdbbc7e819d16903147168 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '8ff67c95922a0236a1e9ce0694abb49c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4)
Nov 23 04:21:01 localhost python3[158791]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent
Nov 23 04:21:02 localhost podman[158854]: 
Nov 23 04:21:02 localhost podman[158854]: 2025-11-23 09:21:02.011199112 +0000 UTC m=+0.085994166 container create 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 23 04:21:02 localhost podman[158854]: 2025-11-23 09:21:01.970826582 +0000 UTC m=+0.045621676 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 23 04:21:02 localhost python3[158791]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume 
/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 23 04:21:02 localhost python3.9[158982]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:21:04 localhost python3.9[159076]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:21:04 localhost podman[159122]: 2025-11-23 09:21:04.405682617 +0000 UTC m=+0.089456584 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:21:04 localhost podman[159122]: 2025-11-23 09:21:04.451918525 +0000 UTC m=+0.135692482 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:21:04 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:21:04 localhost python3.9[159123]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:21:05 localhost python3.9[159239]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763889664.5830214-1300-170418558759138/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16323 DF PROTO=TCP SPT=53446 DPT=9100 SEQ=3028908859 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588AAD90000000001030307) 
Nov 23 04:21:06 localhost python3.9[159285]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:21:06 localhost systemd[1]: Reloading.
Nov 23 04:21:06 localhost systemd-rc-local-generator[159307]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:21:06 localhost systemd-sysv-generator[159312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:21:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:21:07 localhost python3.9[159367]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:21:07 localhost systemd[1]: Reloading.
Nov 23 04:21:07 localhost systemd-rc-local-generator[159396]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:21:07 localhost systemd-sysv-generator[159399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:21:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:21:07 localhost systemd[1]: Starting ovn_metadata_agent container...
Nov 23 04:21:07 localhost systemd[1]: tmp-crun.rtEmJ9.mount: Deactivated successfully.
Nov 23 04:21:07 localhost systemd[1]: Started libcrun container.
Nov 23 04:21:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3040f056d333b514e5747789be7898d41756a2f06f7ccefe7536421a4a2b4ec3/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 23 04:21:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3040f056d333b514e5747789be7898d41756a2f06f7ccefe7536421a4a2b4ec3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:21:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:21:07 localhost podman[159408]: 2025-11-23 09:21:07.579423738 +0000 UTC m=+0.161335842 container init 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + sudo -E kolla_set_configs
Nov 23 04:21:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:21:07 localhost podman[159408]: 2025-11-23 09:21:07.612418991 +0000 UTC m=+0.194331095 container start 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:21:07 localhost edpm-start-podman-container[159408]: ovn_metadata_agent
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Validating config file
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Copying service configuration files
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Writing out command to execute
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: ++ cat /run_command
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + CMD=neutron-ovn-metadata-agent
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + ARGS=
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + sudo kolla_copy_cacerts
Nov 23 04:21:07 localhost edpm-start-podman-container[159407]: Creating additional drop-in dependency for "ovn_metadata_agent" (8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346)
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + [[ ! -n '' ]]
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + . kolla_extend_start
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: Running command: 'neutron-ovn-metadata-agent'
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + umask 0022
Nov 23 04:21:07 localhost ovn_metadata_agent[159423]: + exec neutron-ovn-metadata-agent
Nov 23 04:21:07 localhost systemd[1]: Reloading.
Nov 23 04:21:07 localhost podman[159431]: 2025-11-23 09:21:07.745126948 +0000 UTC m=+0.127682431 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:21:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33192 DF PROTO=TCP SPT=36312 DPT=9105 SEQ=3748246070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588B2DA0000000001030307) 
Nov 23 04:21:07 localhost podman[159431]: 2025-11-23 09:21:07.77605871 +0000 UTC m=+0.158614163 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:21:07 localhost systemd-rc-local-generator[159499]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:21:07 localhost systemd-sysv-generator[159504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:21:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:21:07 localhost systemd[1]: tmp-crun.QD5Wqc.mount: Deactivated successfully.
Nov 23 04:21:07 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:21:07 localhost systemd[1]: Started ovn_metadata_agent container.
Nov 23 04:21:08 localhost systemd[1]: session-51.scope: Deactivated successfully.
Nov 23 04:21:08 localhost systemd[1]: session-51.scope: Consumed 32.766s CPU time.
Nov 23 04:21:08 localhost systemd-logind[761]: Session 51 logged out. Waiting for processes to exit.
Nov 23 04:21:08 localhost systemd-logind[761]: Removed session 51.
Nov 23 04:21:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8177 DF PROTO=TCP SPT=41486 DPT=9102 SEQ=2560510567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588B6D90000000001030307) 
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.178 159429 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.179 159429 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.179 159429 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.179 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.179 159429 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.179 159429 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.180 159429 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.180 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.180 159429 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.180 159429 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.181 159429 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.182 159429 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.183 159429 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.184 159429 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.185 159429 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.186 159429 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.187 159429 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.188 159429 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.189 159429 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.190 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.191 159429 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.192 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost sshd[159528]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.193 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.194 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.195 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.196 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.197 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.198 159429 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.199 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.200 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.201 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.202 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.203 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.204 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.205 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.206 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.207 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.208 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.209 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.210 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.210 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.210 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.210 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.210 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.210 159429 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.210 159429 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.230 159429 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.231 159429 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.231 159429 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.231 159429 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.232 159429 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.257 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 9b5ed7a7-8af8-41a0-a5ff-546625cecbf9 (UUID: 9b5ed7a7-8af8-41a0-a5ff-546625cecbf9) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.282 159429 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.282 159429 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.282 159429 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.283 159429 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.285 159429 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.289 159429 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.302 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '9b5ed7a7-8af8-41a0-a5ff-546625cecbf9'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], external_ids={'neutron:ovn-metadata-id': '6a9f8a51-9d64-5516-8d40-6065c82054c5', 'neutron:ovn-metadata-sb-cfg': '1'}, name=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, nb_cfg_timestamp=1763889611848, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.303 159429 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd3e8e33b50>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.304 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.305 159429 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.305 159429 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.305 159429 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.309 159429 DEBUG oslo_service.service [-] Started child 159530 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.313 159429 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpn7a2lvhk/privsep.sock']#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.313 159530 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-2011037'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.334 159530 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.335 159530 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.335 159530 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.338 159530 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.341 159530 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.354 159530 INFO eventlet.wsgi.server [-] (159530) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.972 159429 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.973 159429 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpn7a2lvhk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.847 159535 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.853 159535 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.856 159535 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.857 159535 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159535#033[00m
Nov 23 04:21:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:09.977 159535 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa66d0f-9838-4edc-80ee-81b5e71a1dba]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:21:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31083 DF PROTO=TCP SPT=37084 DPT=9101 SEQ=920296912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588BCE60000000001030307) 
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.451 159535 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.451 159535 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.451 159535 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.903 159535 DEBUG oslo.privsep.daemon [-] privsep: reply[5673fe94-6b0e-42fb-98f8-695b8ce3ade1]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.905 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, column=external_ids, values=({'neutron:ovn-metadata-id': '6a9f8a51-9d64-5516-8d40-6065c82054c5'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.906 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.906 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.952 159429 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.952 159429 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.953 159429 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.953 159429 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.953 159429 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.953 159429 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.954 159429 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.954 159429 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.954 159429 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.955 159429 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.955 159429 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.955 159429 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.956 159429 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.956 159429 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.956 159429 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.957 159429 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.957 159429 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.957 159429 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.957 159429 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.958 159429 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.958 159429 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.958 159429 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.959 159429 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.959 159429 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.959 159429 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.960 159429 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.960 159429 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.960 159429 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.961 159429 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.961 159429 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.961 159429 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.962 159429 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.962 159429 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.962 159429 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.962 159429 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.963 159429 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.963 159429 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.964 159429 DEBUG oslo_service.service [-] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.964 159429 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.964 159429 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.964 159429 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.965 159429 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.965 159429 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.965 159429 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.966 159429 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.966 159429 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.966 159429 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.966 159429 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.967 159429 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.967 159429 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.967 159429 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.968 159429 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.968 159429 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.968 159429 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.968 159429 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.969 159429 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.969 159429 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.969 159429 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.970 159429 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.970 159429 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.970 159429 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.970 159429 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.971 159429 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.971 159429 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.971 159429 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.972 159429 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.972 159429 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.972 159429 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.973 159429 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.973 159429 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.973 159429 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.974 159429 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.974 159429 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.974 159429 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.974 159429 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.975 159429 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.975 159429 DEBUG oslo_service.service [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.975 159429 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.976 159429 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.976 159429 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.976 159429 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.977 159429 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.977 159429 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.977 159429 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.977 159429 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.978 159429 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.978 159429 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.978 159429 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.978 159429 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.979 159429 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.979 159429 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.980 159429 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.980 159429 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.980 159429 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.981 159429 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.981 159429 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.982 159429 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.982 159429 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.982 159429 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.983 159429 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.983 159429 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.984 159429 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.984 159429 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.984 159429 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.985 159429 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.985 159429 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.986 159429 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.986 159429 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.986 159429 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.987 159429 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.987 159429 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.988 159429 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.988 159429 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.989 159429 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.989 159429 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.989 159429 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.990 159429 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.990 159429 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.991 159429 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.991 159429 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.992 159429 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.992 159429 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.992 159429 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.993 159429 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.993 159429 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.994 159429 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.994 159429 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.994 159429 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.995 159429 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.995 159429 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.996 159429 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.996 159429 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.997 159429 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.997 159429 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.997 159429 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.998 159429 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.998 159429 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.999 159429 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:10 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.999 159429 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:10.999 159429 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.000 159429 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.000 159429 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.000 159429 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.000 159429 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.001 159429 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.001 159429 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.001 159429 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.001 159429 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.002 159429 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.002 159429 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.002 159429 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.002 159429 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.003 159429 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.003 159429 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.003 159429 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.003 159429 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.004 159429 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.004 159429 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.004 159429 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.004 159429 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.005 159429 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.005 159429 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.005 159429 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.005 159429 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.006 159429 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.006 159429 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.006 159429 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.006 159429 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.007 159429 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.007 159429 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.007 159429 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.008 159429 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.008 159429 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.008 159429 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.008 159429 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.009 159429 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.009 159429 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.009 159429 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.009 159429 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.010 159429 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.010 159429 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.010 159429 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.010 159429 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.011 159429 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.011 159429 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.011 159429 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.012 159429 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.012 159429 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.012 159429 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.012 159429 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.013 159429 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.013 159429 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.013 159429 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.013 159429 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.014 159429 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.014 159429 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.014 159429 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.014 159429 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.015 159429 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.015 159429 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.015 159429 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.015 159429 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.016 159429 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.016 159429 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.016 159429 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.016 159429 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.017 159429 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.017 159429 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.017 159429 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.017 159429 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.018 159429 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.018 159429 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.018 159429 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.018 159429 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.019 159429 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.019 159429 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.019 159429 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.019 159429 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.020 159429 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.020 159429 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.020 159429 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.020 159429 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.021 159429 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.021 159429 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.021 159429 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.021 159429 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.022 159429 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.022 159429 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.022 159429 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.022 159429 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.023 159429 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.023 159429 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.023 159429 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.024 159429 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.024 159429 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.024 159429 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.024 159429 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.025 159429 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.025 159429 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.025 159429 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.025 159429 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.026 159429 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.026 159429 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.026 159429 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.026 159429 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.027 159429 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.027 159429 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.027 159429 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.027 159429 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.028 159429 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.028 159429 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.028 159429 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.029 159429 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.029 159429 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.029 159429 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.029 159429 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.030 159429 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.030 159429 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.030 159429 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.030 159429 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.031 159429 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.031 159429 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.031 159429 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.031 159429 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.032 159429 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.032 159429 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.032 159429 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.032 159429 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.033 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.033 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.033 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.033 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.034 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.034 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.034 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.035 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.035 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.035 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.035 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.035 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.036 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.036 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.036 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.036 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.036 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.037 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.037 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.037 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.037 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.037 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.038 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.038 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.038 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.038 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.038 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.038 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.039 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.039 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.039 159429 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.039 159429 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.040 159429 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.040 159429 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.040 159429 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:21:11 localhost ovn_metadata_agent[159423]: 2025-11-23 09:21:11.040 159429 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 23 04:21:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31085 DF PROTO=TCP SPT=37084 DPT=9101 SEQ=920296912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588C8D90000000001030307) 
Nov 23 04:21:13 localhost sshd[159540]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:21:13 localhost systemd-logind[761]: New session 52 of user zuul.
Nov 23 04:21:13 localhost systemd[1]: Started Session 52 of User zuul.
Nov 23 04:21:15 localhost python3.9[159633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:21:17 localhost python3.9[159729]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31086 DF PROTO=TCP SPT=37084 DPT=9101 SEQ=920296912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588D8990000000001030307) 
Nov 23 04:21:19 localhost sshd[159835]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:21:19 localhost python3.9[159834]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:19 localhost systemd[1]: libpod-ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945.scope: Deactivated successfully.
Nov 23 04:21:19 localhost podman[159837]: 2025-11-23 09:21:19.130979256 +0000 UTC m=+0.076153987 container died ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 23 04:21:19 localhost podman[159837]: 2025-11-23 09:21:19.175931792 +0000 UTC m=+0.121106493 container cleanup ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:21:19 localhost podman[159850]: 2025-11-23 09:21:19.207241724 +0000 UTC m=+0.068383250 container remove ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public)
Nov 23 04:21:19 localhost systemd[1]: libpod-conmon-ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945.scope: Deactivated successfully.
Nov 23 04:21:20 localhost systemd[1]: var-lib-containers-storage-overlay-ef10dbd8e654e3d33969e8a1d0f6410664da7d48e949f4169105b8cba2a78b06-merged.mount: Deactivated successfully.
Nov 23 04:21:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff8a2d4bd558f066c11b86416facbf88e314604442f27adbfbe9326be74f3945-userdata-shm.mount: Deactivated successfully.
Nov 23 04:21:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39931 DF PROTO=TCP SPT=42334 DPT=9100 SEQ=1131022690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588E3360000000001030307) 
Nov 23 04:21:20 localhost python3.9[159961]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:21:20 localhost systemd[1]: Reloading.
Nov 23 04:21:20 localhost systemd-sysv-generator[159989]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:21:20 localhost systemd-rc-local-generator[159985]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:21:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:21:21 localhost python3.9[160087]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:21:21 localhost network[160104]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:21:21 localhost network[160105]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:21:21 localhost network[160106]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:21:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39933 DF PROTO=TCP SPT=42334 DPT=9100 SEQ=1131022690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588EF590000000001030307) 
Nov 23 04:21:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16438 DF PROTO=TCP SPT=57966 DPT=9105 SEQ=1152501952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7588F7990000000001030307) 
Nov 23 04:21:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16439 DF PROTO=TCP SPT=57966 DPT=9105 SEQ=1152501952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758907590000000001030307) 
Nov 23 04:21:29 localhost python3.9[160386]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:21:29 localhost systemd[1]: Reloading.
Nov 23 04:21:30 localhost systemd-rc-local-generator[160413]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:21:30 localhost systemd-sysv-generator[160418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:21:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:21:30 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target.
Nov 23 04:21:31 localhost python3.9[160518]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:21:32 localhost python3.9[160611]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:21:33 localhost python3.9[160704]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:21:34 localhost python3.9[160797]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:21:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:21:34 localhost systemd[1]: tmp-crun.5vsZj1.mount: Deactivated successfully.
Nov 23 04:21:34 localhost podman[160891]: 2025-11-23 09:21:34.626269152 +0000 UTC m=+0.099259951 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 04:21:34 localhost podman[160891]: 2025-11-23 09:21:34.669417673 +0000 UTC m=+0.142408492 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, container_name=ovn_controller, config_id=ovn_controller)
Nov 23 04:21:34 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:21:34 localhost python3.9[160890]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:21:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39935 DF PROTO=TCP SPT=42334 DPT=9100 SEQ=1131022690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75891ED90000000001030307) 
Nov 23 04:21:35 localhost python3.9[161008]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:21:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16440 DF PROTO=TCP SPT=57966 DPT=9105 SEQ=1152501952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758926D90000000001030307) 
Nov 23 04:21:37 localhost python3.9[161101]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:21:38 localhost podman[161161]: 2025-11-23 09:21:38.187168316 +0000 UTC m=+0.088134233 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 23 04:21:38 localhost podman[161161]: 2025-11-23 09:21:38.217176037 +0000 UTC m=+0.118141994 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:21:38 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:21:38 localhost python3.9[161209]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54215 DF PROTO=TCP SPT=51534 DPT=9102 SEQ=187873907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75892ADA0000000001030307) 
Nov 23 04:21:39 localhost python3.9[161301]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:39 localhost python3.9[161393]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:40 localhost python3.9[161485]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:40 localhost python3.9[161577]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37649 DF PROTO=TCP SPT=43134 DPT=9882 SEQ=1496518556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758935CC0000000001030307) 
Nov 23 04:21:41 localhost python3.9[161669]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:42 localhost python3.9[161761]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:42 localhost python3.9[161853]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:43 localhost python3.9[161945]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44365 DF PROTO=TCP SPT=53516 DPT=9101 SEQ=1296692197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75893E190000000001030307) 
Nov 23 04:21:44 localhost python3.9[162037]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:44 localhost python3.9[162129]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:45 localhost python3.9[162221]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:46 localhost python3.9[162313]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:21:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44366 DF PROTO=TCP SPT=53516 DPT=9101 SEQ=1296692197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75894DD90000000001030307) 
Nov 23 04:21:47 localhost python3.9[162405]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:48 localhost python3.9[162497]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:21:49 localhost python3.9[162589]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:21:49 localhost systemd[1]: Reloading.
Nov 23 04:21:49 localhost systemd-rc-local-generator[162612]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:21:49 localhost systemd-sysv-generator[162619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:21:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:21:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64456 DF PROTO=TCP SPT=44606 DPT=9100 SEQ=1258928478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758958670000000001030307) 
Nov 23 04:21:50 localhost python3.9[162716]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:51 localhost python3.9[162809]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:51 localhost python3.9[162902]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:52 localhost python3.9[162995]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64458 DF PROTO=TCP SPT=44606 DPT=9100 SEQ=1258928478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758964590000000001030307) 
Nov 23 04:21:53 localhost python3.9[163088]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:54 localhost python3.9[163181]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:55 localhost python3.9[163274]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:21:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42740 DF PROTO=TCP SPT=56476 DPT=9105 SEQ=1419309558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75896CD90000000001030307) 
Nov 23 04:21:57 localhost python3.9[163367]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 23 04:21:58 localhost python3.9[163460]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 04:21:58 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 76.6 (255 of 333 items), suggesting rotation.
Nov 23 04:21:58 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 04:21:58 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:21:58 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:21:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42741 DF PROTO=TCP SPT=56476 DPT=9105 SEQ=1419309558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75897C990000000001030307) 
Nov 23 04:21:59 localhost python3.9[163559]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532586.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 04:22:01 localhost python3.9[163659]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:22:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:22:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5640 writes, 24K keys, 5640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5640 writes, 724 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5606e91182d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 23 04:22:01 localhost python3.9[163713]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:22:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:22:05 localhost podman[163716]: 2025-11-23 09:22:05.179295961 +0000 UTC m=+0.084104635 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:22:05 localhost podman[163716]: 2025-11-23 09:22:05.251077108 +0000 UTC m=+0.155885772 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 23 04:22:05 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:22:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64460 DF PROTO=TCP SPT=44606 DPT=9100 SEQ=1258928478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758994D90000000001030307) 
Nov 23 04:22:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:22:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 4929 writes, 22K keys, 4929 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4929 writes, 684 syncs, 7.21 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55720e44a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55720e44a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Nov 23 04:22:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42742 DF PROTO=TCP SPT=56476 DPT=9105 SEQ=1419309558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75899CD90000000001030307) 
Nov 23 04:22:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53079 DF PROTO=TCP SPT=53594 DPT=9102 SEQ=940463951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7589A0DA0000000001030307) 
Nov 23 04:22:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:22:09 localhost podman[163807]: 2025-11-23 09:22:09.174132367 +0000 UTC m=+0.074475759 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:22:09 localhost podman[163807]: 2025-11-23 09:22:09.183783894 +0000 UTC m=+0.084127296 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:22:09 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:22:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:22:09.217 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:22:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:22:09.218 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:22:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:22:09.218 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:22:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25316 DF PROTO=TCP SPT=48354 DPT=9101 SEQ=2605461707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7589A7460000000001030307) 
Nov 23 04:22:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25318 DF PROTO=TCP SPT=48354 DPT=9101 SEQ=2605461707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7589B3590000000001030307) 
Nov 23 04:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25319 DF PROTO=TCP SPT=48354 DPT=9101 SEQ=2605461707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7589C3190000000001030307) 
Nov 23 04:22:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30301 DF PROTO=TCP SPT=51650 DPT=9100 SEQ=1588353837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7589CD960000000001030307) 
Nov 23 04:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30303 DF PROTO=TCP SPT=51650 DPT=9100 SEQ=1588353837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7589D9990000000001030307) 
Nov 23 04:22:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15396 DF PROTO=TCP SPT=47370 DPT=9105 SEQ=2624476045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7589E2190000000001030307) 
Nov 23 04:22:26 localhost kernel: SELinux:  Converting 2746 SID table entries...
Nov 23 04:22:26 localhost kernel: SELinux:  Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped).
Nov 23 04:22:27 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:22:27 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:22:27 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:22:27 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:22:27 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:22:27 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:22:27 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:22:28 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=19 res=1
Nov 23 04:22:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15397 DF PROTO=TCP SPT=47370 DPT=9105 SEQ=2624476045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7589F1D90000000001030307) 
Nov 23 04:22:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30305 DF PROTO=TCP SPT=51650 DPT=9100 SEQ=1588353837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A08D90000000001030307) 
Nov 23 04:22:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:22:36 localhost systemd[1]: tmp-crun.FZquph.mount: Deactivated successfully.
Nov 23 04:22:36 localhost podman[164922]: 2025-11-23 09:22:36.190459351 +0000 UTC m=+0.092937754 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:22:36 localhost podman[164922]: 2025-11-23 09:22:36.228637793 +0000 UTC m=+0.131116216 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 23 04:22:36 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:22:37 localhost kernel: SELinux:  Converting 2749 SID table entries...
Nov 23 04:22:37 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:22:37 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:22:37 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:22:37 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:22:37 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:22:37 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:22:37 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15398 DF PROTO=TCP SPT=47370 DPT=9105 SEQ=2624476045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A12DA0000000001030307) 
Nov 23 04:22:38 localhost sshd[164954]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:22:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9360 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=2390309091 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A16D90000000001030307) 
Nov 23 04:22:39 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=20 res=1
Nov 23 04:22:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:22:39 localhost systemd[1]: tmp-crun.65B0Ia.mount: Deactivated successfully.
Nov 23 04:22:39 localhost podman[164956]: 2025-11-23 09:22:39.635390644 +0000 UTC m=+0.092677427 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 23 04:22:39 localhost podman[164956]: 2025-11-23 09:22:39.645942344 +0000 UTC m=+0.103229167 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:22:39 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:22:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9368 DF PROTO=TCP SPT=40338 DPT=9882 SEQ=2901673713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A202D0000000001030307) 
Nov 23 04:22:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17094 DF PROTO=TCP SPT=42494 DPT=9101 SEQ=860788109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A28990000000001030307) 
Nov 23 04:22:43 localhost sshd[164974]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:22:45 localhost kernel: SELinux:  Converting 2749 SID table entries...
Nov 23 04:22:45 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:22:45 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:22:45 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:22:45 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:22:45 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:22:45 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:22:45 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:22:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17095 DF PROTO=TCP SPT=42494 DPT=9101 SEQ=860788109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A38590000000001030307) 
Nov 23 04:22:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39854 DF PROTO=TCP SPT=56306 DPT=9100 SEQ=3779524303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A42C70000000001030307) 
Nov 23 04:22:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39856 DF PROTO=TCP SPT=56306 DPT=9100 SEQ=3779524303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A4EDA0000000001030307) 
Nov 23 04:22:53 localhost kernel: SELinux:  Converting 2749 SID table entries...
Nov 23 04:22:53 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:22:53 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:22:53 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:22:53 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:22:53 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:22:53 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:22:53 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:22:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2536 DF PROTO=TCP SPT=58840 DPT=9105 SEQ=608258218 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A57190000000001030307) 
Nov 23 04:22:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2537 DF PROTO=TCP SPT=58840 DPT=9105 SEQ=608258218 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A66DA0000000001030307) 
Nov 23 04:23:03 localhost kernel: SELinux:  Converting 2749 SID table entries...
Nov 23 04:23:03 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:23:03 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:23:03 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:23:03 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:23:03 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:23:03 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:23:03 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:23:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39858 DF PROTO=TCP SPT=56306 DPT=9100 SEQ=3779524303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A7ED90000000001030307) 
Nov 23 04:23:07 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=23 res=1
Nov 23 04:23:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:23:07 localhost systemd[1]: tmp-crun.MUgxkJ.mount: Deactivated successfully.
Nov 23 04:23:07 localhost podman[165006]: 2025-11-23 09:23:07.201020536 +0000 UTC m=+0.096159294 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 04:23:07 localhost podman[165006]: 2025-11-23 09:23:07.242933001 +0000 UTC m=+0.138071789 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:23:07 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:23:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2538 DF PROTO=TCP SPT=58840 DPT=9105 SEQ=608258218 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A86D90000000001030307) 
Nov 23 04:23:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29469 DF PROTO=TCP SPT=43646 DPT=9102 SEQ=4254438050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A8AD90000000001030307) 
Nov 23 04:23:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:23:09.218 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:23:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:23:09.218 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:23:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:23:09.219 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:23:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:23:10 localhost systemd[1]: tmp-crun.5FpCfA.mount: Deactivated successfully.
Nov 23 04:23:10 localhost podman[165031]: 2025-11-23 09:23:10.189910482 +0000 UTC m=+0.094095047 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 23 04:23:10 localhost podman[165031]: 2025-11-23 09:23:10.197878221 +0000 UTC m=+0.102062776 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 04:23:10 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:23:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40239 DF PROTO=TCP SPT=57206 DPT=9882 SEQ=3024426508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A955D0000000001030307) 
Nov 23 04:23:11 localhost kernel: SELinux:  Converting 2749 SID table entries...
Nov 23 04:23:11 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:23:11 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:23:11 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:23:11 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:23:11 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:23:11 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:23:11 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:23:12 localhost systemd[1]: Reloading.
Nov 23 04:23:12 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=24 res=1
Nov 23 04:23:12 localhost systemd-sysv-generator[165081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:23:12 localhost systemd-rc-local-generator[165078]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:23:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:23:12 localhost systemd[1]: Reloading.
Nov 23 04:23:12 localhost systemd-rc-local-generator[165117]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:23:12 localhost systemd-sysv-generator[165124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:23:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:23:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5101 DF PROTO=TCP SPT=49372 DPT=9101 SEQ=135888026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758A9D990000000001030307) 
Nov 23 04:23:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5102 DF PROTO=TCP SPT=49372 DPT=9101 SEQ=135888026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758AAD590000000001030307) 
Nov 23 04:23:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48601 DF PROTO=TCP SPT=37748 DPT=9100 SEQ=792761598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758AB7F60000000001030307) 
Nov 23 04:23:21 localhost kernel: SELinux:  Converting 2750 SID table entries...
Nov 23 04:23:21 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 23 04:23:21 localhost kernel: SELinux:  policy capability open_perms=1
Nov 23 04:23:21 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 23 04:23:21 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 23 04:23:21 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 23 04:23:21 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 23 04:23:21 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 23 04:23:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48603 DF PROTO=TCP SPT=37748 DPT=9100 SEQ=792761598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758AC4190000000001030307) 
Nov 23 04:23:25 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Nov 23 04:23:25 localhost dbus-broker-launch[756]: avc:  op=load_policy lsm=selinux seqno=25 res=1
Nov 23 04:23:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53905 DF PROTO=TCP SPT=60088 DPT=9105 SEQ=3369907538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758ACC590000000001030307) 
Nov 23 04:23:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53906 DF PROTO=TCP SPT=60088 DPT=9105 SEQ=3369907538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758ADC190000000001030307) 
Nov 23 04:23:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48605 DF PROTO=TCP SPT=37748 DPT=9100 SEQ=792761598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758AF4D90000000001030307) 
Nov 23 04:23:36 localhost sshd[165462]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:23:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53907 DF PROTO=TCP SPT=60088 DPT=9105 SEQ=3369907538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758AFCDA0000000001030307) 
Nov 23 04:23:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:23:38 localhost podman[165464]: 2025-11-23 09:23:38.201906802 +0000 UTC m=+0.095950727 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:23:38 localhost systemd[1]: tmp-crun.3473st.mount: Deactivated successfully.
Nov 23 04:23:38 localhost podman[165464]: 2025-11-23 09:23:38.280765266 +0000 UTC m=+0.174809161 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:23:38 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:23:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8354 DF PROTO=TCP SPT=50780 DPT=9102 SEQ=736887563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B00DA0000000001030307) 
Nov 23 04:23:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52732 DF PROTO=TCP SPT=42880 DPT=9101 SEQ=645913838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B07F70000000001030307) 
Nov 23 04:23:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:23:41 localhost podman[167223]: 2025-11-23 09:23:41.16660895 +0000 UTC m=+0.072518312 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:23:41 localhost podman[167223]: 2025-11-23 09:23:41.1999519 +0000 UTC m=+0.105861292 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 04:23:41 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:23:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52734 DF PROTO=TCP SPT=42880 DPT=9101 SEQ=645913838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B14190000000001030307) 
Nov 23 04:23:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52735 DF PROTO=TCP SPT=42880 DPT=9101 SEQ=645913838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B23D90000000001030307) 
Nov 23 04:23:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23080 DF PROTO=TCP SPT=58118 DPT=9100 SEQ=1618550308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B2D270000000001030307) 
Nov 23 04:23:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23082 DF PROTO=TCP SPT=58118 DPT=9100 SEQ=1618550308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B39190000000001030307) 
Nov 23 04:23:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4704 DF PROTO=TCP SPT=39904 DPT=9105 SEQ=1516305123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B419A0000000001030307) 
Nov 23 04:23:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4705 DF PROTO=TCP SPT=39904 DPT=9105 SEQ=1516305123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B51590000000001030307) 
Nov 23 04:24:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23084 DF PROTO=TCP SPT=58118 DPT=9100 SEQ=1618550308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B68DA0000000001030307) 
Nov 23 04:24:05 localhost systemd[1]: Stopping OpenSSH server daemon...
Nov 23 04:24:05 localhost systemd[1]: sshd.service: Deactivated successfully.
Nov 23 04:24:05 localhost systemd[1]: Stopped OpenSSH server daemon.
Nov 23 04:24:05 localhost systemd[1]: sshd.service: Consumed 2.216s CPU time, read 32.0K from disk, written 0B to disk.
Nov 23 04:24:05 localhost systemd[1]: Stopped target sshd-keygen.target.
Nov 23 04:24:05 localhost systemd[1]: Stopping sshd-keygen.target...
Nov 23 04:24:05 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:24:05 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:24:05 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 23 04:24:05 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 23 04:24:05 localhost systemd[1]: Starting OpenSSH server daemon...
Nov 23 04:24:05 localhost sshd[183198]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:24:05 localhost systemd[1]: Started OpenSSH server daemon.
Nov 23 04:24:05 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:05 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:06 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4706 DF PROTO=TCP SPT=39904 DPT=9105 SEQ=1516305123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B70D90000000001030307) 
Nov 23 04:24:07 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 04:24:07 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 04:24:07 localhost systemd[1]: Reloading.
Nov 23 04:24:07 localhost systemd-rc-local-generator[183431]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:07 localhost systemd-sysv-generator[183434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:07 localhost sshd[183665]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:24:08 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 04:24:08 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 04:24:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32911 DF PROTO=TCP SPT=60204 DPT=9102 SEQ=2160861804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B74DA0000000001030307) 
Nov 23 04:24:08 localhost sshd[185189]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:24:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:24:09 localhost systemd[1]: tmp-crun.AkqKvG.mount: Deactivated successfully.
Nov 23 04:24:09 localhost podman[185243]: 2025-11-23 09:24:09.193315247 +0000 UTC m=+0.428674628 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 23 04:24:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:24:09.219 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:24:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:24:09.220 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:24:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:24:09.220 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:24:09 localhost podman[185243]: 2025-11-23 09:24:09.573444834 +0000 UTC m=+0.808804225 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:24:09 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:24:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=258 DF PROTO=TCP SPT=59688 DPT=9882 SEQ=2194463064 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B7FBD0000000001030307) 
Nov 23 04:24:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:24:11 localhost podman[188037]: 2025-11-23 09:24:11.411512226 +0000 UTC m=+0.071282008 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 04:24:11 localhost podman[188037]: 2025-11-23 09:24:11.444963258 +0000 UTC m=+0.104733030 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 23 04:24:11 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:24:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22113 DF PROTO=TCP SPT=41286 DPT=9101 SEQ=663299455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B881A0000000001030307) 
Nov 23 04:24:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22114 DF PROTO=TCP SPT=41286 DPT=9101 SEQ=663299455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758B97DA0000000001030307) 
Nov 23 04:24:19 localhost sshd[192223]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:24:19 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 04:24:19 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 04:24:19 localhost systemd[1]: man-db-cache-update.service: Consumed 14.848s CPU time.
Nov 23 04:24:19 localhost systemd[1]: run-r181eed15f5b2482ba0cecd91fe6bffbc.service: Deactivated successfully.
Nov 23 04:24:19 localhost systemd[1]: run-r76632649e45443f09e846b9e3bbf7e63.service: Deactivated successfully.
Nov 23 04:24:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13008 DF PROTO=TCP SPT=57792 DPT=9100 SEQ=2578179345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BA2570000000001030307) 
Nov 23 04:24:21 localhost python3.9[192318]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:24:21 localhost systemd[1]: Reloading.
Nov 23 04:24:21 localhost systemd-sysv-generator[192345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:21 localhost systemd-rc-local-generator[192341]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:22 localhost python3.9[192467]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:24:22 localhost systemd[1]: Reloading.
Nov 23 04:24:22 localhost systemd-sysv-generator[192498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:22 localhost systemd-rc-local-generator[192493]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13010 DF PROTO=TCP SPT=57792 DPT=9100 SEQ=2578179345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BAE5A0000000001030307) 
Nov 23 04:24:24 localhost python3.9[192615]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:24:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17060 DF PROTO=TCP SPT=40728 DPT=9105 SEQ=2583554866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BB6D90000000001030307) 
Nov 23 04:24:25 localhost systemd[1]: Reloading.
Nov 23 04:24:25 localhost systemd-rc-local-generator[192642]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:25 localhost systemd-sysv-generator[192645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:26 localhost python3.9[192764]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:24:26 localhost systemd[1]: Reloading.
Nov 23 04:24:26 localhost systemd-sysv-generator[192793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:26 localhost systemd-rc-local-generator[192786]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:28 localhost python3.9[192913]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:28 localhost systemd[1]: Reloading.
Nov 23 04:24:28 localhost systemd-rc-local-generator[192943]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:28 localhost systemd-sysv-generator[192947]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost python3.9[193062]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:29 localhost systemd[1]: Reloading.
Nov 23 04:24:29 localhost systemd-sysv-generator[193096]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:29 localhost systemd-rc-local-generator[193091]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17061 DF PROTO=TCP SPT=40728 DPT=9105 SEQ=2583554866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BC6990000000001030307) 
Nov 23 04:24:30 localhost python3.9[193210]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:30 localhost systemd[1]: Reloading.
Nov 23 04:24:30 localhost systemd-sysv-generator[193239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:30 localhost systemd-rc-local-generator[193236]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:31 localhost python3.9[193359]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:32 localhost python3.9[193472]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:32 localhost systemd[1]: Reloading.
Nov 23 04:24:32 localhost systemd-rc-local-generator[193499]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:32 localhost systemd-sysv-generator[193506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:33 localhost python3.9[193621]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:24:33 localhost systemd[1]: Reloading.
Nov 23 04:24:33 localhost systemd-sysv-generator[193666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:24:33 localhost systemd-rc-local-generator[193662]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:24:34 localhost python3.9[193822]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13012 DF PROTO=TCP SPT=57792 DPT=9100 SEQ=2578179345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BDED90000000001030307) 
Nov 23 04:24:36 localhost python3.9[193969]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:37 localhost python3.9[194082]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17062 DF PROTO=TCP SPT=40728 DPT=9105 SEQ=2583554866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BE6DA0000000001030307) 
Nov 23 04:24:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52281 DF PROTO=TCP SPT=60042 DPT=9102 SEQ=2373411182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BEAD90000000001030307) 
Nov 23 04:24:38 localhost python3.9[194195]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:39 localhost python3.9[194308]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:24:40 localhost systemd[1]: tmp-crun.2wC7mr.mount: Deactivated successfully.
Nov 23 04:24:40 localhost podman[194422]: 2025-11-23 09:24:40.145961769 +0000 UTC m=+0.097169013 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 23 04:24:40 localhost podman[194422]: 2025-11-23 09:24:40.186944629 +0000 UTC m=+0.138151913 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller)
Nov 23 04:24:40 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:24:40 localhost python3.9[194421]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18636 DF PROTO=TCP SPT=38590 DPT=9882 SEQ=1425089700 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BF4EC0000000001030307) 
Nov 23 04:24:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:24:41 localhost systemd[1]: tmp-crun.axkJ4p.mount: Deactivated successfully.
Nov 23 04:24:41 localhost podman[194559]: 2025-11-23 09:24:41.972152842 +0000 UTC m=+0.087553617 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:24:41 localhost podman[194559]: 2025-11-23 09:24:41.980895893 +0000 UTC m=+0.096296658 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 04:24:41 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:24:42 localhost python3.9[194560]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:42 localhost python3.9[194691]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36698 DF PROTO=TCP SPT=40348 DPT=9101 SEQ=1906803143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758BFD5A0000000001030307) 
Nov 23 04:24:43 localhost python3.9[194804]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:45 localhost python3.9[194917]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:47 localhost python3.9[195030]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36699 DF PROTO=TCP SPT=40348 DPT=9101 SEQ=1906803143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C0D190000000001030307) 
Nov 23 04:24:47 localhost python3.9[195143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:49 localhost python3.9[195256]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27144 DF PROTO=TCP SPT=49380 DPT=9100 SEQ=3521912609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C17870000000001030307) 
Nov 23 04:24:51 localhost python3.9[195369]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 23 04:24:52 localhost python3.9[195482]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:24:52 localhost python3.9[195592]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:24:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27146 DF PROTO=TCP SPT=49380 DPT=9100 SEQ=3521912609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C23990000000001030307) 
Nov 23 04:24:53 localhost python3.9[195702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:24:54 localhost python3.9[195812]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:24:54 localhost python3.9[195922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:24:55 localhost python3.9[196032]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:24:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52634 DF PROTO=TCP SPT=53944 DPT=9105 SEQ=1534325782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C2BD90000000001030307) 
Nov 23 04:24:56 localhost python3.9[196142]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:24:56 localhost python3.9[196232]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889895.5313134-1646-84912384281878/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:24:57 localhost python3.9[196342]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:24:58 localhost python3.9[196432]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889896.9746842-1646-247876146586775/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:24:58 localhost python3.9[196542]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:24:59 localhost python3.9[196632]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889898.1930442-1646-154715396796674/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:24:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52635 DF PROTO=TCP SPT=53944 DPT=9105 SEQ=1534325782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C3B990000000001030307) 
Nov 23 04:24:59 localhost python3.9[196742]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:00 localhost python3.9[196832]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889899.3234713-1646-279573815083204/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:00 localhost python3.9[196942]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:01 localhost python3.9[197032]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889900.4890494-1646-265079197143322/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:02 localhost python3.9[197142]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:03 localhost python3.9[197232]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889902.4948401-1646-249514062316726/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:04 localhost python3.9[197342]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:04 localhost python3.9[197430]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889903.6298916-1646-37835533519197/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27148 DF PROTO=TCP SPT=49380 DPT=9100 SEQ=3521912609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C52D90000000001030307) 
Nov 23 04:25:05 localhost python3.9[197540]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:06 localhost python3.9[197630]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763889905.03835-1646-81442354536227/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52636 DF PROTO=TCP SPT=53944 DPT=9105 SEQ=1534325782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C5CDA0000000001030307) 
Nov 23 04:25:07 localhost python3.9[197740]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:08 localhost python3.9[197850]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54978 DF PROTO=TCP SPT=46600 DPT=9102 SEQ=3207985864 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C60DA0000000001030307) 
Nov 23 04:25:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:25:09.221 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:25:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:25:09.222 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:25:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:25:09.222 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:25:09 localhost python3.9[197960]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:09 localhost python3.9[198070]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:25:10 localhost systemd[1]: tmp-crun.1CzB1Y.mount: Deactivated successfully.
Nov 23 04:25:10 localhost podman[198181]: 2025-11-23 09:25:10.5156013 +0000 UTC m=+0.087254915 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:25:10 localhost podman[198181]: 2025-11-23 09:25:10.551885132 +0000 UTC m=+0.123538747 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 23 04:25:10 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:25:10 localhost python3.9[198180]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:11 localhost python3.9[198315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51039 DF PROTO=TCP SPT=52898 DPT=9882 SEQ=1203201329 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C6A1D0000000001030307) 
Nov 23 04:25:11 localhost python3.9[198425]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:25:12 localhost podman[198484]: 2025-11-23 09:25:12.183991096 +0000 UTC m=+0.087600174 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 04:25:12 localhost podman[198484]: 2025-11-23 09:25:12.21414855 +0000 UTC m=+0.117757578 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 04:25:12 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:25:12 localhost python3.9[198553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:13 localhost python3.9[198663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32727 DF PROTO=TCP SPT=39752 DPT=9101 SEQ=2353497346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C72590000000001030307) 
Nov 23 04:25:13 localhost python3.9[198773]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:14 localhost python3.9[198883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:14 localhost python3.9[198993]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:15 localhost python3.9[199103]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:16 localhost python3.9[199213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32728 DF PROTO=TCP SPT=39752 DPT=9101 SEQ=2353497346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C82190000000001030307) 
Nov 23 04:25:18 localhost python3.9[199323]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:18 localhost python3.9[199433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:19 localhost python3.9[199521]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889918.368396-2308-83288214168712/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:19 localhost python3.9[199631]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60435 DF PROTO=TCP SPT=51996 DPT=9100 SEQ=2200901694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C8CB70000000001030307) 
Nov 23 04:25:21 localhost python3.9[199719]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889919.5035279-2308-157485778469998/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:21 localhost python3.9[199829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:22 localhost python3.9[199917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889921.2929308-2308-260006251521323/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:22 localhost python3.9[200027]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60437 DF PROTO=TCP SPT=51996 DPT=9100 SEQ=2200901694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758C98D90000000001030307) 
Nov 23 04:25:23 localhost python3.9[200115]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889922.4816942-2308-117763830962980/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:24 localhost python3.9[200225]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:24 localhost python3.9[200313]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889923.627675-2308-74740099850120/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:25 localhost python3.9[200423]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36693 DF PROTO=TCP SPT=56126 DPT=9105 SEQ=2246459922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758CA11A0000000001030307) 
Nov 23 04:25:25 localhost python3.9[200511]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889924.7203362-2308-174172373756079/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:26 localhost python3.9[200621]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:26 localhost python3.9[200709]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889925.8953004-2308-233824169432849/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:27 localhost python3.9[200819]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:28 localhost python3.9[200907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889927.0296364-2308-2100926751617/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:28 localhost python3.9[201017]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:29 localhost python3.9[201105]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889928.1715097-2308-281123040511390/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36694 DF PROTO=TCP SPT=56126 DPT=9105 SEQ=2246459922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758CB0D90000000001030307) 
Nov 23 04:25:29 localhost python3.9[201215]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:30 localhost python3.9[201303]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889929.2360256-2308-39845003058256/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:30 localhost python3.9[201413]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:31 localhost python3.9[201501]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889930.3874133-2308-3760312045294/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:32 localhost python3.9[201611]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:32 localhost python3.9[201699]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889931.8706324-2308-62283887727932/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:33 localhost python3.9[201809]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:34 localhost python3.9[201897]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889932.989155-2308-261131756602446/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:35 localhost python3.9[202007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60439 DF PROTO=TCP SPT=51996 DPT=9100 SEQ=2200901694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758CC8DA0000000001030307) 
Nov 23 04:25:35 localhost python3.9[202095]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889934.5916004-2308-207314896898669/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:35 localhost sshd[202148]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:25:36 localhost python3.9[202280]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:25:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36695 DF PROTO=TCP SPT=56126 DPT=9105 SEQ=2246459922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758CD0D90000000001030307) 
Nov 23 04:25:37 localhost python3.9[202444]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 23 04:25:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9173 DF PROTO=TCP SPT=34962 DPT=9102 SEQ=27927168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758CD4D90000000001030307) 
Nov 23 04:25:38 localhost python3.9[202572]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:25:38 localhost systemd[1]: Reloading.
Nov 23 04:25:38 localhost systemd-rc-local-generator[202599]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:25:38 localhost systemd-sysv-generator[202602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:39 localhost systemd[1]: Starting libvirt logging daemon socket...
Nov 23 04:25:39 localhost systemd[1]: Listening on libvirt logging daemon socket.
Nov 23 04:25:39 localhost systemd[1]: Starting libvirt logging daemon admin socket...
Nov 23 04:25:39 localhost systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 23 04:25:39 localhost systemd[1]: Starting libvirt logging daemon...
Nov 23 04:25:39 localhost sshd[202615]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:25:39 localhost systemd[1]: Started libvirt logging daemon.
Nov 23 04:25:40 localhost python3.9[202726]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:25:40 localhost systemd[1]: Reloading.
Nov 23 04:25:40 localhost systemd-sysv-generator[202756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:25:40 localhost systemd-rc-local-generator[202750]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:40 localhost systemd[1]: Starting libvirt nodedev daemon socket...
Nov 23 04:25:40 localhost systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 23 04:25:40 localhost systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 23 04:25:40 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 23 04:25:40 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 23 04:25:40 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 23 04:25:40 localhost systemd[1]: Started libvirt nodedev daemon.
Nov 23 04:25:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:25:40 localhost systemd[1]: tmp-crun.7AFaZg.mount: Deactivated successfully.
Nov 23 04:25:41 localhost podman[202902]: 2025-11-23 09:25:41.00030638 +0000 UTC m=+0.092630022 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 23 04:25:41 localhost podman[202902]: 2025-11-23 09:25:41.041887656 +0000 UTC m=+0.134211268 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 04:25:41 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:25:41 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 23 04:25:41 localhost python3.9[202901]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:25:41 localhost systemd[1]: Reloading.
Nov 23 04:25:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59634 DF PROTO=TCP SPT=48754 DPT=9882 SEQ=3640952179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758CDF4D0000000001030307) 
Nov 23 04:25:41 localhost systemd-rc-local-generator[202953]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:25:41 localhost systemd-sysv-generator[202958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:41 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 23 04:25:41 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 23 04:25:41 localhost systemd[1]: Starting libvirt proxy daemon socket...
Nov 23 04:25:41 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 23 04:25:41 localhost systemd[1]: Listening on libvirt proxy daemon socket.
Nov 23 04:25:41 localhost systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 23 04:25:41 localhost systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 23 04:25:41 localhost systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 23 04:25:41 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 23 04:25:41 localhost systemd[1]: Started libvirt proxy daemon.
Nov 23 04:25:42 localhost python3.9[203108]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:25:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:25:42 localhost systemd[1]: Reloading.
Nov 23 04:25:42 localhost systemd-rc-local-generator[203146]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:25:42 localhost podman[203110]: 2025-11-23 09:25:42.491637398 +0000 UTC m=+0.110197832 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 23 04:25:42 localhost systemd-sysv-generator[203149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:25:42 localhost setroubleshoot[202928]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 312adb72-ba7a-42d5-b216-0caeb7041066
Nov 23 04:25:42 localhost setroubleshoot[202928]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 23 04:25:42 localhost setroubleshoot[202928]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 312adb72-ba7a-42d5-b216-0caeb7041066
Nov 23 04:25:42 localhost setroubleshoot[202928]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 23 04:25:42 localhost podman[203110]: 2025-11-23 09:25:42.529161053 +0000 UTC m=+0.147721517 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:42 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:25:42 localhost systemd[1]: Listening on libvirt locking daemon socket.
Nov 23 04:25:42 localhost systemd[1]: Starting libvirt QEMU daemon socket...
Nov 23 04:25:42 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 04:25:42 localhost systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 23 04:25:42 localhost systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 23 04:25:42 localhost systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 23 04:25:42 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 23 04:25:42 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 23 04:25:42 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 23 04:25:42 localhost systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 23 04:25:42 localhost systemd[1]: Started libvirt QEMU daemon.
Nov 23 04:25:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56524 DF PROTO=TCP SPT=33552 DPT=9101 SEQ=1183248874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758CE7990000000001030307) 
Nov 23 04:25:43 localhost python3.9[203299]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:25:43 localhost systemd[1]: Reloading.
Nov 23 04:25:43 localhost systemd-rc-local-generator[203326]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:25:43 localhost systemd-sysv-generator[203330]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:25:43 localhost systemd[1]: Starting libvirt secret daemon socket...
Nov 23 04:25:43 localhost systemd[1]: Listening on libvirt secret daemon socket.
Nov 23 04:25:43 localhost systemd[1]: Starting libvirt secret daemon admin socket...
Nov 23 04:25:43 localhost systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 23 04:25:43 localhost systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 23 04:25:43 localhost systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 23 04:25:43 localhost systemd[1]: Started libvirt secret daemon.
Nov 23 04:25:44 localhost python3.9[203470]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:45 localhost python3.9[203580]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:25:46 localhost python3.9[203690]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:25:47 localhost python3.9[203802]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:25:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56525 DF PROTO=TCP SPT=33552 DPT=9101 SEQ=1183248874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758CF75A0000000001030307) 
Nov 23 04:25:48 localhost python3.9[203910]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:48 localhost python3.9[203996]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889947.6075575-3173-211659065287947/.source.xml follow=False _original_basename=secret.xml.j2 checksum=08854374a51612ae60ccb5be5d56c7ff5bc71f08 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:49 localhost python3.9[204106]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 46550e70-79cb-5f55-bf6d-1204b97e083b#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:25:49 localhost python3.9[204226]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4796 DF PROTO=TCP SPT=51234 DPT=9100 SEQ=4035676675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D01E70000000001030307) 
Nov 23 04:25:52 localhost python3.9[204563]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:52 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 23 04:25:52 localhost systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 23 04:25:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4798 DF PROTO=TCP SPT=51234 DPT=9100 SEQ=4035676675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D0DD90000000001030307) 
Nov 23 04:25:53 localhost python3.9[204673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:53 localhost python3.9[204761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889952.9490535-3338-74770679644522/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:54 localhost python3.9[204871]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2924 DF PROTO=TCP SPT=55600 DPT=9105 SEQ=1044109876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D165A0000000001030307) 
Nov 23 04:25:55 localhost python3.9[204981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:55 localhost python3.9[205038]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:56 localhost python3.9[205148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:57 localhost python3.9[205205]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n_x0ywop recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:57 localhost python3.9[205315]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:25:58 localhost python3.9[205372]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:25:59 localhost python3.9[205482]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:25:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2925 DF PROTO=TCP SPT=55600 DPT=9105 SEQ=1044109876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D26190000000001030307) 
Nov 23 04:25:59 localhost python3[205593]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 04:26:00 localhost python3.9[205703]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:26:00 localhost python3.9[205760]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:01 localhost python3.9[205870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:26:02 localhost python3.9[205927]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:02 localhost python3.9[206037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:26:04 localhost python3.9[206094]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:04 localhost python3.9[206204]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:26:05 localhost python3.9[206261]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4800 DF PROTO=TCP SPT=51234 DPT=9100 SEQ=4035676675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D3ED90000000001030307) 
Nov 23 04:26:06 localhost python3.9[206371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:26:07 localhost python3.9[206461]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763889965.6221492-3713-42413351303383/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2926 DF PROTO=TCP SPT=55600 DPT=9105 SEQ=1044109876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D46D90000000001030307) 
Nov 23 04:26:07 localhost python3.9[206571]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:08 localhost python3.9[206681]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:26:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31184 DF PROTO=TCP SPT=57712 DPT=9102 SEQ=1360744699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D4AD90000000001030307) 
Nov 23 04:26:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:26:09.222 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:26:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:26:09.223 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:26:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:26:09.224 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:26:09 localhost python3.9[206794]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:10 localhost python3.9[206904]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:26:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:26:11 localhost podman[207016]: 2025-11-23 09:26:11.161934089 +0000 UTC m=+0.070640811 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 04:26:11 localhost podman[207016]: 2025-11-23 09:26:11.200905287 +0000 UTC m=+0.109612019 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 23 04:26:11 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:26:11 localhost python3.9[207015]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:26:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=735 DF PROTO=TCP SPT=32834 DPT=9882 SEQ=251912091 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D547C0000000001030307) 
Nov 23 04:26:11 localhost python3.9[207152]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:26:12 localhost python3.9[207265]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:26:13 localhost podman[207354]: 2025-11-23 09:26:13.172746852 +0000 UTC m=+0.078758907 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:26:13 localhost podman[207354]: 2025-11-23 09:26:13.177695804 +0000 UTC m=+0.083707839 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:26:13 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:26:13 localhost python3.9[207385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:26:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52155 DF PROTO=TCP SPT=60880 DPT=9101 SEQ=1648387185 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D5CD90000000001030307) 
Nov 23 04:26:13 localhost python3.9[207483]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889972.9208016-3929-242459391368892/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:14 localhost python3.9[207593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:26:15 localhost python3.9[207681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889974.047387-3974-6220791665011/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:15 localhost python3.9[207791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:26:16 localhost sshd[207825]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:26:17 localhost python3.9[207881]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763889975.191016-4020-63735207874245/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52156 DF PROTO=TCP SPT=60880 DPT=9101 SEQ=1648387185 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D6C9A0000000001030307) 
Nov 23 04:26:18 localhost python3.9[207991]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:26:18 localhost systemd[1]: Reloading.
Nov 23 04:26:18 localhost systemd-sysv-generator[208019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:26:18 localhost systemd-rc-local-generator[208014]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:18 localhost sshd[208028]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:26:18 localhost systemd[1]: Reached target edpm_libvirt.target.
Nov 23 04:26:19 localhost python3.9[208143]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 23 04:26:19 localhost systemd[1]: Reloading.
Nov 23 04:26:20 localhost systemd-rc-local-generator[208171]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:26:20 localhost systemd-sysv-generator[208175]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:26:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33226 DF PROTO=TCP SPT=53142 DPT=9100 SEQ=4275062429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D77170000000001030307) 
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: Reloading.
Nov 23 04:26:20 localhost systemd-rc-local-generator[208205]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:26:20 localhost systemd-sysv-generator[208211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:21 localhost systemd[1]: session-52.scope: Deactivated successfully.
Nov 23 04:26:21 localhost systemd[1]: session-52.scope: Consumed 3min 39.133s CPU time.
Nov 23 04:26:21 localhost systemd-logind[761]: Session 52 logged out. Waiting for processes to exit.
Nov 23 04:26:21 localhost systemd-logind[761]: Removed session 52.
Nov 23 04:26:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33228 DF PROTO=TCP SPT=53142 DPT=9100 SEQ=4275062429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D831A0000000001030307) 
Nov 23 04:26:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34605 DF PROTO=TCP SPT=36226 DPT=9105 SEQ=4204139097 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D8B990000000001030307) 
Nov 23 04:26:26 localhost sshd[208235]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:26:26 localhost systemd-logind[761]: New session 53 of user zuul.
Nov 23 04:26:26 localhost systemd[1]: Started Session 53 of User zuul.
Nov 23 04:26:27 localhost python3.9[208346]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:26:28 localhost python3.9[208458]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:26:29 localhost network[208475]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:26:29 localhost network[208476]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:26:29 localhost network[208477]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:26:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34606 DF PROTO=TCP SPT=36226 DPT=9105 SEQ=4204139097 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758D9B590000000001030307) 
Nov 23 04:26:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:26:34 localhost python3.9[208709]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:26:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33230 DF PROTO=TCP SPT=53142 DPT=9100 SEQ=4275062429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758DB2D90000000001030307) 
Nov 23 04:26:35 localhost python3.9[208772]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:26:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34607 DF PROTO=TCP SPT=36226 DPT=9105 SEQ=4204139097 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758DBAD90000000001030307) 
Nov 23 04:26:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3283 DF PROTO=TCP SPT=49896 DPT=9102 SEQ=20422500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758DBED90000000001030307) 
Nov 23 04:26:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15942 DF PROTO=TCP SPT=36014 DPT=9882 SEQ=1941593680 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758DC9AC0000000001030307) 
Nov 23 04:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:26:42 localhost systemd[1]: tmp-crun.EkYTaQ.mount: Deactivated successfully.
Nov 23 04:26:42 localhost podman[208861]: 2025-11-23 09:26:42.179694009 +0000 UTC m=+0.079295692 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 23 04:26:42 localhost podman[208861]: 2025-11-23 09:26:42.21506559 +0000 UTC m=+0.114667273 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:26:42 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:26:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:26:43 localhost podman[208996]: 2025-11-23 09:26:43.342993263 +0000 UTC m=+0.082812196 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:26:43 localhost podman[208996]: 2025-11-23 09:26:43.377287466 +0000 UTC m=+0.117106409 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:26:43 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:26:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9920 DF PROTO=TCP SPT=50394 DPT=9101 SEQ=1940988877 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758DD2190000000001030307) 
Nov 23 04:26:43 localhost python3.9[208995]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:26:44 localhost sshd[209124]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:26:44 localhost python3.9[209123]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:45 localhost sshd[209200]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:26:45 localhost python3.9[209236]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:26:46 localhost python3.9[209347]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:26:46 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:e7:d2:09 MACPROTO=0800 SRC=18.219.193.156 DST=38.102.83.162 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=38343 DF PROTO=TCP SPT=43792 DPT=9090 SEQ=3239036707 ACK=0 WINDOW=62727 RES=0x00 SYN URGP=0 OPT (020405B40402080AA9C346170000000001030307) 
Nov 23 04:26:47 localhost python3.9[209458]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:26:48 localhost python3.9[209569]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:26:49 localhost python3.9[209681]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:26:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54774 DF PROTO=TCP SPT=34936 DPT=9100 SEQ=3715368086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758DEC470000000001030307) 
Nov 23 04:26:50 localhost python3.9[209791]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:26:50 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 23 04:26:51 localhost python3.9[209905]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:26:51 localhost systemd[1]: Reloading.
Nov 23 04:26:51 localhost systemd-rc-local-generator[209931]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:26:51 localhost systemd-sysv-generator[209937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:26:52 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 23 04:26:52 localhost systemd[1]: Starting Open-iSCSI...
Nov 23 04:26:52 localhost iscsid[209946]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 23 04:26:52 localhost iscsid[209946]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Nov 23 04:26:52 localhost iscsid[209946]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 23 04:26:52 localhost iscsid[209946]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 23 04:26:52 localhost iscsid[209946]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 23 04:26:52 localhost iscsid[209946]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 23 04:26:52 localhost iscsid[209946]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf
Nov 23 04:26:52 localhost systemd[1]: Started Open-iSCSI.
Nov 23 04:26:52 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 23 04:26:52 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 23 04:26:53 localhost python3.9[210057]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:26:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54776 DF PROTO=TCP SPT=34936 DPT=9100 SEQ=3715368086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758DF8590000000001030307) 
Nov 23 04:26:53 localhost network[210074]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:26:53 localhost network[210075]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:26:53 localhost network[210076]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:26:55 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 23 04:26:55 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 23 04:26:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3574 DF PROTO=TCP SPT=55758 DPT=9105 SEQ=681853540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E00990000000001030307) 
Nov 23 04:26:55 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service.
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l b7fb1714-987a-4876-97b9-483eaa676bb7
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l b7fb1714-987a-4876-97b9-483eaa676bb7
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l b7fb1714-987a-4876-97b9-483eaa676bb7
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l b7fb1714-987a-4876-97b9-483eaa676bb7
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l b7fb1714-987a-4876-97b9-483eaa676bb7
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l b7fb1714-987a-4876-97b9-483eaa676bb7
Nov 23 04:26:56 localhost setroubleshoot[210089]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012*****  Plugin catchall (100. confidence) suggests   **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012
Nov 23 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:26:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3575 DF PROTO=TCP SPT=55758 DPT=9105 SEQ=681853540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E10590000000001030307) 
Nov 23 04:26:59 localhost python3.9[210325]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 04:27:00 localhost python3.9[210435]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 23 04:27:00 localhost python3.9[210549]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:01 localhost python3.9[210637]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890020.492283-458-193709213712449/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:02 localhost python3.9[210747]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:03 localhost python3.9[210857]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:27:03 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 04:27:03 localhost systemd[1]: Stopped Load Kernel Modules.
Nov 23 04:27:03 localhost systemd[1]: Stopping Load Kernel Modules...
Nov 23 04:27:03 localhost systemd[1]: Starting Load Kernel Modules...
Nov 23 04:27:03 localhost systemd-modules-load[210861]: Module 'msr' is built in
Nov 23 04:27:03 localhost systemd[1]: Finished Load Kernel Modules.
Nov 23 04:27:04 localhost python3.9[210971]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:27:05 localhost python3.9[211081]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:27:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54778 DF PROTO=TCP SPT=34936 DPT=9100 SEQ=3715368086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E28D90000000001030307) 
Nov 23 04:27:05 localhost python3.9[211191]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:27:06 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully.
Nov 23 04:27:06 localhost systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 23 04:27:06 localhost python3.9[211301]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:07 localhost python3.9[211390]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890026.1469884-631-1281095504970/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3576 DF PROTO=TCP SPT=55758 DPT=9105 SEQ=681853540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E30DA0000000001030307) 
Nov 23 04:27:07 localhost python3.9[211500]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:27:08 localhost sshd[211535]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:27:08 localhost python3.9[211613]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42585 DF PROTO=TCP SPT=33632 DPT=9102 SEQ=995558602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E34DA0000000001030307) 
Nov 23 04:27:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:27:09.224 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:27:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:27:09.225 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:27:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:27:09.226 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:27:09 localhost python3.9[211723]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:09 localhost sshd[211724]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:27:10 localhost python3.9[211835]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28696 DF PROTO=TCP SPT=54304 DPT=9882 SEQ=2621771300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E3EDD0000000001030307) 
Nov 23 04:27:11 localhost python3.9[211945]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:12 localhost python3.9[212055]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:27:13 localhost podman[212148]: 2025-11-23 09:27:13.190666927 +0000 UTC m=+0.082699532 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:27:13 localhost podman[212148]: 2025-11-23 09:27:13.231888964 +0000 UTC m=+0.123921549 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 04:27:13 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:27:13 localhost python3.9[212178]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29579 DF PROTO=TCP SPT=46110 DPT=9101 SEQ=846547848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E471A0000000001030307) 
Nov 23 04:27:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:27:13 localhost podman[212301]: 2025-11-23 09:27:13.892348552 +0000 UTC m=+0.074281137 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:27:13 localhost podman[212301]: 2025-11-23 09:27:13.898415659 +0000 UTC m=+0.080348264 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 04:27:13 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:27:14 localhost python3.9[212300]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:14 localhost python3.9[212428]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:27:15 localhost python3.9[212540]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:16 localhost python3.9[212650]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:27:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29580 DF PROTO=TCP SPT=46110 DPT=9101 SEQ=846547848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E56D90000000001030307) 
Nov 23 04:27:17 localhost python3.9[212760]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:18 localhost python3.9[212817]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:27:18 localhost python3.9[212927]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:19 localhost python3.9[212984]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:27:19 localhost python3.9[213094]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35417 DF PROTO=TCP SPT=44066 DPT=9100 SEQ=1413148414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E61760000000001030307) 
Nov 23 04:27:20 localhost python3.9[213204]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:20 localhost python3.9[213261]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:21 localhost python3.9[213371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:21 localhost python3.9[213428]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:22 localhost python3.9[213538]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:27:22 localhost systemd[1]: Reloading.
Nov 23 04:27:22 localhost systemd-sysv-generator[213567]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:27:22 localhost systemd-rc-local-generator[213559]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:27:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35419 DF PROTO=TCP SPT=44066 DPT=9100 SEQ=1413148414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E6D990000000001030307) 
Nov 23 04:27:24 localhost python3.9[213686]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:25 localhost python3.9[213743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2606 DF PROTO=TCP SPT=59930 DPT=9105 SEQ=2786596538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E75D90000000001030307) 
Nov 23 04:27:25 localhost python3.9[213853]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:27 localhost python3.9[213910]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:28 localhost python3.9[214020]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:27:28 localhost systemd[1]: Reloading.
Nov 23 04:27:28 localhost systemd-rc-local-generator[214042]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:27:28 localhost systemd-sysv-generator[214048]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:28 localhost systemd[1]: Starting Create netns directory...
Nov 23 04:27:28 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 04:27:28 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 04:27:28 localhost systemd[1]: Finished Create netns directory.
Nov 23 04:27:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2607 DF PROTO=TCP SPT=59930 DPT=9105 SEQ=2786596538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E85990000000001030307) 
Nov 23 04:27:29 localhost python3.9[214171]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:27:30 localhost python3.9[214281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:30 localhost python3.9[214369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890049.7362113-1253-102646986308995/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:27:31 localhost python3.9[214479]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:27:32 localhost python3.9[214589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:33 localhost python3.9[214677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890051.9762046-1327-264110947120757/.source.json _original_basename=.723ns9it follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:33 localhost python3.9[214787]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35421 DF PROTO=TCP SPT=44066 DPT=9100 SEQ=1413148414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758E9CD90000000001030307) 
Nov 23 04:27:36 localhost python3.9[215095]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 23 04:27:37 localhost python3.9[215205]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:27:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2608 DF PROTO=TCP SPT=59930 DPT=9105 SEQ=2786596538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758EA6D90000000001030307) 
Nov 23 04:27:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20086 DF PROTO=TCP SPT=58534 DPT=9102 SEQ=2591645655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758EA8DA0000000001030307) 
Nov 23 04:27:38 localhost python3.9[215315]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 04:27:40 localhost podman[215468]: 2025-11-23 09:27:40.407029306 +0000 UTC m=+0.087861636 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, RELEASE=main, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, version=7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.33.12, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:27:40 localhost podman[215468]: 2025-11-23 09:27:40.51796006 +0000 UTC m=+0.198792450 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, version=7, io.openshift.expose-services=, distribution-scope=public, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=553, com.redhat.component=rhceph-container)
Nov 23 04:27:40 localhost systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 23 04:27:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15293 DF PROTO=TCP SPT=46262 DPT=9882 SEQ=2320351549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758EB40C0000000001030307) 
Nov 23 04:27:41 localhost systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 23 04:27:42 localhost python3[215712]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:27:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64397 DF PROTO=TCP SPT=50938 DPT=9101 SEQ=1542566406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758EBC590000000001030307) 
Nov 23 04:27:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:27:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:27:44 localhost podman[215740]: 2025-11-23 09:27:44.188486677 +0000 UTC m=+0.090910471 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 23 04:27:44 localhost podman[215740]: 2025-11-23 09:27:44.19525489 +0000 UTC m=+0.097678684 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 23 04:27:44 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:27:44 localhost podman[215741]: 2025-11-23 09:27:44.296450509 +0000 UTC m=+0.198871453 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:27:44 localhost podman[215741]: 2025-11-23 09:27:44.332940032 +0000 UTC m=+0.235360966 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:27:44 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:27:44 localhost podman[215726]: 2025-11-23 09:27:43.052073935 +0000 UTC m=+0.046303131 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 23 04:27:45 localhost podman[215816]: 
Nov 23 04:27:45 localhost podman[215816]: 2025-11-23 09:27:45.150515191 +0000 UTC m=+0.072780502 container create aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 04:27:45 localhost podman[215816]: 2025-11-23 09:27:45.113972617 +0000 UTC m=+0.036237938 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 23 04:27:45 localhost python3[215712]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 23 04:27:46 localhost python3.9[215964]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:27:47 localhost python3.9[216076]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64398 DF PROTO=TCP SPT=50938 DPT=9101 SEQ=1542566406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758ECC190000000001030307) 
Nov 23 04:27:47 localhost python3.9[216131]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:27:48 localhost python3.9[216240]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890067.5164812-1592-16556478495542/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:48 localhost python3.9[216295]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:27:48 localhost systemd[1]: Reloading.
Nov 23 04:27:48 localhost systemd-rc-local-generator[216320]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:27:48 localhost systemd-sysv-generator[216323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:49 localhost python3.9[216386]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:27:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27621 DF PROTO=TCP SPT=38468 DPT=9100 SEQ=3399878801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758ED6A90000000001030307) 
Nov 23 04:27:50 localhost systemd[1]: Reloading.
Nov 23 04:27:50 localhost systemd-sysv-generator[216414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:27:50 localhost systemd-rc-local-generator[216411]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:50 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:27:50 localhost systemd[1]: Starting multipathd container...
Nov 23 04:27:51 localhost systemd[1]: Started libcrun container.
Nov 23 04:27:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7feee40b282b84887d5d08deb1fd865ed9d6f7111d19820854dfc049bb4252/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 04:27:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7feee40b282b84887d5d08deb1fd865ed9d6f7111d19820854dfc049bb4252/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 04:27:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:27:51 localhost podman[216426]: 2025-11-23 09:27:51.15618739 +0000 UTC m=+0.149509377 container init aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 23 04:27:51 localhost multipathd[216440]: + sudo -E kolla_set_configs
Nov 23 04:27:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:27:51 localhost podman[216426]: 2025-11-23 09:27:51.189801654 +0000 UTC m=+0.183123611 container start aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 04:27:51 localhost podman[216426]: multipathd
Nov 23 04:27:51 localhost systemd[1]: Started multipathd container.
Nov 23 04:27:51 localhost multipathd[216440]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:27:51 localhost multipathd[216440]: INFO:__main__:Validating config file
Nov 23 04:27:51 localhost multipathd[216440]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:27:51 localhost multipathd[216440]: INFO:__main__:Writing out command to execute
Nov 23 04:27:51 localhost multipathd[216440]: ++ cat /run_command
Nov 23 04:27:51 localhost multipathd[216440]: + CMD='/usr/sbin/multipathd -d'
Nov 23 04:27:51 localhost multipathd[216440]: + ARGS=
Nov 23 04:27:51 localhost multipathd[216440]: + sudo kolla_copy_cacerts
Nov 23 04:27:51 localhost podman[216448]: 2025-11-23 09:27:51.270470775 +0000 UTC m=+0.074186966 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:27:51 localhost multipathd[216440]: + [[ ! -n '' ]]
Nov 23 04:27:51 localhost multipathd[216440]: + . kolla_extend_start
Nov 23 04:27:51 localhost multipathd[216440]: Running command: '/usr/sbin/multipathd -d'
Nov 23 04:27:51 localhost multipathd[216440]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 23 04:27:51 localhost multipathd[216440]: + umask 0022
Nov 23 04:27:51 localhost multipathd[216440]: + exec /usr/sbin/multipathd -d
Nov 23 04:27:51 localhost multipathd[216440]: 10014.522366 | --------start up--------
Nov 23 04:27:51 localhost multipathd[216440]: 10014.522385 | read /etc/multipath.conf
Nov 23 04:27:51 localhost multipathd[216440]: 10014.525676 | path checkers start up
Nov 23 04:27:51 localhost podman[216448]: 2025-11-23 09:27:51.30994655 +0000 UTC m=+0.113662701 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:27:51 localhost podman[216448]: unhealthy
Nov 23 04:27:51 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:27:51 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Failed with result 'exit-code'.
Nov 23 04:27:51 localhost systemd[1]: virtqemud.service: Deactivated successfully.
Nov 23 04:27:51 localhost systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 04:27:52 localhost python3.9[216588]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:27:53 localhost python3.9[216700]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:27:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27623 DF PROTO=TCP SPT=38468 DPT=9100 SEQ=3399878801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758EE29A0000000001030307) 
Nov 23 04:27:54 localhost python3.9[216823]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:27:55 localhost systemd[1]: Stopping multipathd container...
Nov 23 04:27:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56426 DF PROTO=TCP SPT=54410 DPT=9105 SEQ=2817176458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758EEB190000000001030307) 
Nov 23 04:27:55 localhost multipathd[216440]: 10018.638272 | exit (signal)
Nov 23 04:27:55 localhost multipathd[216440]: 10018.638782 | --------shut down-------
Nov 23 04:27:55 localhost systemd[1]: libpod-aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.scope: Deactivated successfully.
Nov 23 04:27:55 localhost podman[216827]: 2025-11-23 09:27:55.438672493 +0000 UTC m=+0.100018831 container died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:27:55 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.timer: Deactivated successfully.
Nov 23 04:27:55 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:27:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9-userdata-shm.mount: Deactivated successfully.
Nov 23 04:27:55 localhost podman[216827]: 2025-11-23 09:27:55.513721449 +0000 UTC m=+0.175067737 container cleanup aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:27:55 localhost podman[216827]: multipathd
Nov 23 04:27:55 localhost podman[216855]: 2025-11-23 09:27:55.59520304 +0000 UTC m=+0.042422728 container cleanup aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 04:27:55 localhost podman[216855]: multipathd
Nov 23 04:27:55 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 23 04:27:55 localhost systemd[1]: Stopped multipathd container.
Nov 23 04:27:55 localhost systemd[1]: Starting multipathd container...
Nov 23 04:27:55 localhost systemd[1]: Started libcrun container.
Nov 23 04:27:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7feee40b282b84887d5d08deb1fd865ed9d6f7111d19820854dfc049bb4252/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 04:27:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7feee40b282b84887d5d08deb1fd865ed9d6f7111d19820854dfc049bb4252/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 04:27:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:27:55 localhost podman[216868]: 2025-11-23 09:27:55.73948102 +0000 UTC m=+0.117781880 container init aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:27:55 localhost multipathd[216883]: + sudo -E kolla_set_configs
Nov 23 04:27:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:27:55 localhost podman[216868]: 2025-11-23 09:27:55.782111381 +0000 UTC m=+0.160412231 container start aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 04:27:55 localhost podman[216868]: multipathd
Nov 23 04:27:55 localhost systemd[1]: Started multipathd container.
Nov 23 04:27:55 localhost multipathd[216883]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:27:55 localhost multipathd[216883]: INFO:__main__:Validating config file
Nov 23 04:27:55 localhost multipathd[216883]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:27:55 localhost multipathd[216883]: INFO:__main__:Writing out command to execute
Nov 23 04:27:55 localhost multipathd[216883]: ++ cat /run_command
Nov 23 04:27:55 localhost multipathd[216883]: + CMD='/usr/sbin/multipathd -d'
Nov 23 04:27:55 localhost multipathd[216883]: + ARGS=
Nov 23 04:27:55 localhost multipathd[216883]: + sudo kolla_copy_cacerts
Nov 23 04:27:55 localhost podman[216892]: 2025-11-23 09:27:55.869839974 +0000 UTC m=+0.082047687 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:27:55 localhost multipathd[216883]: + [[ ! -n '' ]]
Nov 23 04:27:55 localhost multipathd[216883]: + . kolla_extend_start
Nov 23 04:27:55 localhost multipathd[216883]: Running command: '/usr/sbin/multipathd -d'
Nov 23 04:27:55 localhost multipathd[216883]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 23 04:27:55 localhost multipathd[216883]: + umask 0022
Nov 23 04:27:55 localhost multipathd[216883]: + exec /usr/sbin/multipathd -d
Nov 23 04:27:55 localhost multipathd[216883]: 10019.116955 | --------start up--------
Nov 23 04:27:55 localhost multipathd[216883]: 10019.116977 | read /etc/multipath.conf
Nov 23 04:27:55 localhost multipathd[216883]: 10019.120509 | path checkers start up
Nov 23 04:27:55 localhost podman[216892]: 2025-11-23 09:27:55.912045305 +0000 UTC m=+0.124253048 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Nov 23 04:27:55 localhost podman[216892]: unhealthy
Nov 23 04:27:55 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:27:55 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Failed with result 'exit-code'.
Nov 23 04:27:56 localhost python3.9[217029]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:27:57 localhost python3.9[217139]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 04:27:58 localhost python3.9[217249]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 23 04:27:58 localhost python3.9[217367]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:27:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56427 DF PROTO=TCP SPT=54410 DPT=9105 SEQ=2817176458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758EFAD90000000001030307) 
Nov 23 04:27:59 localhost python3.9[217455]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890078.4747498-1832-85773879985952/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:00 localhost python3.9[217565]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:01 localhost python3.9[217675]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:28:01 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 04:28:01 localhost systemd[1]: Stopped Load Kernel Modules.
Nov 23 04:28:01 localhost systemd[1]: Stopping Load Kernel Modules...
Nov 23 04:28:01 localhost systemd[1]: Starting Load Kernel Modules...
Nov 23 04:28:01 localhost systemd-modules-load[217679]: Module 'msr' is built in
Nov 23 04:28:01 localhost systemd[1]: Finished Load Kernel Modules.
Nov 23 04:28:02 localhost python3.9[217789]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:28:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27625 DF PROTO=TCP SPT=38468 DPT=9100 SEQ=3399878801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F12D90000000001030307) 
Nov 23 04:28:06 localhost systemd[1]: Reloading.
Nov 23 04:28:06 localhost systemd-rc-local-generator[217823]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:28:06 localhost systemd-sysv-generator[217826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: Reloading.
Nov 23 04:28:06 localhost systemd-rc-local-generator[217859]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:28:06 localhost systemd-sysv-generator[217865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost systemd-logind[761]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 23 04:28:07 localhost systemd-logind[761]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 23 04:28:07 localhost lvm[217910]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 04:28:07 localhost lvm[217911]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 23 04:28:07 localhost lvm[217910]: VG ceph_vg0 finished
Nov 23 04:28:07 localhost lvm[217911]: VG ceph_vg1 finished
Nov 23 04:28:07 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 23 04:28:07 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 23 04:28:07 localhost systemd[1]: Reloading.
Nov 23 04:28:07 localhost systemd-sysv-generator[217963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:28:07 localhost systemd-rc-local-generator[217957]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56428 DF PROTO=TCP SPT=54410 DPT=9105 SEQ=2817176458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F1AD90000000001030307) 
Nov 23 04:28:07 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 23 04:28:08 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 23 04:28:08 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 23 04:28:08 localhost systemd[1]: man-db-cache-update.service: Consumed 1.341s CPU time.
Nov 23 04:28:08 localhost systemd[1]: run-r9248c9ccbdd1438997fb5ec95789a345.service: Deactivated successfully.
Nov 23 04:28:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13981 DF PROTO=TCP SPT=58200 DPT=9102 SEQ=4216325670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F1ED90000000001030307) 
Nov 23 04:28:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:28:09.225 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:28:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:28:09.227 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:28:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:28:09.227 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:28:09 localhost python3.9[219205]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:28:10 localhost python3.9[219319]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51036 DF PROTO=TCP SPT=46576 DPT=9882 SEQ=755332918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F293C0000000001030307) 
Nov 23 04:28:11 localhost python3.9[219429]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:28:11 localhost systemd[1]: Reloading.
Nov 23 04:28:12 localhost systemd-rc-local-generator[219454]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:28:12 localhost systemd-sysv-generator[219458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:12 localhost python3.9[219573]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:28:13 localhost network[219590]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:28:13 localhost network[219591]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:28:13 localhost network[219592]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:28:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5749 DF PROTO=TCP SPT=49570 DPT=9101 SEQ=1357951428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F31990000000001030307) 
Nov 23 04:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:28:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:28:14 localhost podman[219631]: 2025-11-23 09:28:14.341733958 +0000 UTC m=+0.096325287 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:28:14 localhost podman[219631]: 2025-11-23 09:28:14.37387957 +0000 UTC m=+0.128470939 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 04:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:28:14 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:28:14 localhost podman[219654]: 2025-11-23 09:28:14.484634924 +0000 UTC m=+0.098866744 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 23 04:28:14 localhost podman[219654]: 2025-11-23 09:28:14.551143336 +0000 UTC m=+0.165375156 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:28:14 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:28:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5750 DF PROTO=TCP SPT=49570 DPT=9101 SEQ=1357951428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F41590000000001030307) 
Nov 23 04:28:18 localhost python3.9[219870]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:28:19 localhost python3.9[219981]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:28:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52458 DF PROTO=TCP SPT=33204 DPT=9100 SEQ=435485712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F4BD80000000001030307) 
Nov 23 04:28:20 localhost python3.9[220092]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:28:22 localhost python3.9[220203]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:28:23 localhost python3.9[220314]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:28:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52460 DF PROTO=TCP SPT=33204 DPT=9100 SEQ=435485712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F57D90000000001030307) 
Nov 23 04:28:23 localhost python3.9[220425]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:28:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25136 DF PROTO=TCP SPT=55880 DPT=9105 SEQ=3648986047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F605A0000000001030307) 
Nov 23 04:28:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:28:26 localhost python3.9[220536]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:28:26 localhost podman[220537]: 2025-11-23 09:28:26.178968424 +0000 UTC m=+0.084512677 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:28:26 localhost podman[220537]: 2025-11-23 09:28:26.196954185 +0000 UTC m=+0.102498428 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 04:28:26 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:28:26 localhost python3.9[220665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:28:28 localhost python3.9[220776]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:29 localhost python3.9[220886]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25137 DF PROTO=TCP SPT=55880 DPT=9105 SEQ=3648986047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F701A0000000001030307) 
Nov 23 04:28:29 localhost python3.9[220996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:30 localhost python3.9[221106]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:30 localhost python3.9[221216]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:31 localhost sshd[221293]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:28:31 localhost python3.9[221328]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:32 localhost python3.9[221438]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:32 localhost python3.9[221548]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:33 localhost python3.9[221658]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:33 localhost python3.9[221768]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:34 localhost sshd[221769]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:28:34 localhost python3.9[221880]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:35 localhost python3.9[221990]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52462 DF PROTO=TCP SPT=33204 DPT=9100 SEQ=435485712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F88DA0000000001030307) 
Nov 23 04:28:35 localhost python3.9[222100]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:36 localhost python3.9[222210]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:37 localhost python3.9[222320]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25138 DF PROTO=TCP SPT=55880 DPT=9105 SEQ=3648986047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F90D90000000001030307) 
Nov 23 04:28:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62578 DF PROTO=TCP SPT=34622 DPT=9102 SEQ=3479585901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F94D90000000001030307) 
Nov 23 04:28:39 localhost python3.9[222430]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:28:40 localhost python3.9[222540]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:41 localhost python3.9[222650]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:28:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59147 DF PROTO=TCP SPT=52156 DPT=9882 SEQ=3249464506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758F9E6C0000000001030307) 
Nov 23 04:28:42 localhost python3.9[222760]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:28:42 localhost systemd[1]: Reloading.
Nov 23 04:28:42 localhost systemd-sysv-generator[222789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:28:42 localhost systemd-rc-local-generator[222785]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:28:43 localhost python3.9[222959]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60585 DF PROTO=TCP SPT=41618 DPT=9101 SEQ=2382437053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758FA6D90000000001030307) 
Nov 23 04:28:43 localhost python3.9[223084]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:44 localhost python3.9[223213]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:28:44 localhost podman[223215]: 2025-11-23 09:28:44.501719575 +0000 UTC m=+0.074794412 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:28:44 localhost podman[223215]: 2025-11-23 09:28:44.532862881 +0000 UTC m=+0.105937758 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:28:44 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:28:44 localhost systemd[1]: tmp-crun.Ag3yOC.mount: Deactivated successfully.
Nov 23 04:28:44 localhost podman[223342]: 2025-11-23 09:28:44.967367641 +0000 UTC m=+0.092037444 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:28:45 localhost podman[223342]: 2025-11-23 09:28:45.005888511 +0000 UTC m=+0.130558354 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 04:28:45 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:28:45 localhost python3.9[223341]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:45 localhost python3.9[223476]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:46 localhost python3.9[223587]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:46 localhost python3.9[223698]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:47 localhost python3.9[223809]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:28:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60586 DF PROTO=TCP SPT=41618 DPT=9101 SEQ=2382437053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758FB69A0000000001030307) 
Nov 23 04:28:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26389 DF PROTO=TCP SPT=52674 DPT=9100 SEQ=2691478627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758FC10A0000000001030307) 
Nov 23 04:28:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26391 DF PROTO=TCP SPT=52674 DPT=9100 SEQ=2691478627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758FCD190000000001030307) 
Nov 23 04:28:53 localhost python3.9[223920]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:54 localhost python3.9[224030]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:55 localhost python3.9[224140]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16909 DF PROTO=TCP SPT=38576 DPT=9105 SEQ=3025655798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758FD5590000000001030307) 
Nov 23 04:28:55 localhost python3.9[224250]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:56 localhost python3.9[224360]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:28:56 localhost podman[224471]: 2025-11-23 09:28:56.829382257 +0000 UTC m=+0.088167502 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:28:56 localhost podman[224471]: 2025-11-23 09:28:56.841498215 +0000 UTC m=+0.100283430 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:28:56 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:28:56 localhost python3.9[224470]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:57 localhost python3.9[224599]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:58 localhost python3.9[224709]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:58 localhost python3.9[224819]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:28:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16910 DF PROTO=TCP SPT=38576 DPT=9105 SEQ=3025655798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758FE5190000000001030307) 
Nov 23 04:28:59 localhost python3.9[224929]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26393 DF PROTO=TCP SPT=52674 DPT=9100 SEQ=2691478627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A758FFCD90000000001030307) 
Nov 23 04:29:07 localhost python3.9[225039]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 23 04:29:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16911 DF PROTO=TCP SPT=38576 DPT=9105 SEQ=3025655798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759004DA0000000001030307) 
Nov 23 04:29:08 localhost python3.9[225150]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 04:29:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3303 DF PROTO=TCP SPT=35332 DPT=9102 SEQ=2755443219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759008D90000000001030307) 
Nov 23 04:29:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:29:09.227 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:29:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:29:09.227 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:29:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:29:09.227 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:29:10 localhost python3.9[225266]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532586.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 04:29:11 localhost sshd[225292]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:29:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38417 DF PROTO=TCP SPT=55360 DPT=9882 SEQ=557070468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590139C0000000001030307) 
Nov 23 04:29:11 localhost systemd-logind[761]: New session 54 of user zuul.
Nov 23 04:29:11 localhost systemd[1]: Started Session 54 of User zuul.
Nov 23 04:29:11 localhost systemd[1]: session-54.scope: Deactivated successfully.
Nov 23 04:29:11 localhost systemd-logind[761]: Session 54 logged out. Waiting for processes to exit.
Nov 23 04:29:11 localhost systemd-logind[761]: Removed session 54.
Nov 23 04:29:11 localhost sshd[225367]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:29:12 localhost python3.9[225405]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:29:12 localhost python3.9[225491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890151.7780485-3391-82195851375810/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:13 localhost python3.9[225599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:29:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23560 DF PROTO=TCP SPT=59580 DPT=9101 SEQ=3870007375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75901BD90000000001030307) 
Nov 23 04:29:13 localhost python3.9[225654]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:14 localhost python3.9[225762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:29:14 localhost python3.9[225848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890154.0096323-3391-111390909813009/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:29:15 localhost systemd[1]: tmp-crun.6nTIbk.mount: Deactivated successfully.
Nov 23 04:29:15 localhost podman[225866]: 2025-11-23 09:29:15.182296338 +0000 UTC m=+0.084720739 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 04:29:15 localhost podman[225867]: 2025-11-23 09:29:15.221016767 +0000 UTC m=+0.123392906 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 23 04:29:15 localhost podman[225867]: 2025-11-23 09:29:15.261957448 +0000 UTC m=+0.164333617 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 04:29:15 localhost podman[225866]: 2025-11-23 09:29:15.273944132 +0000 UTC m=+0.176368613 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:29:15 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:29:15 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:29:15 localhost python3.9[226002]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:29:16 localhost python3.9[226088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890155.1494615-3391-218190297274633/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=f7e1adb02ce1fc9821a25015c3baa66ad68c917c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:17 localhost python3.9[226196]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:29:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23561 DF PROTO=TCP SPT=59580 DPT=9101 SEQ=3870007375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75902B990000000001030307) 
Nov 23 04:29:17 localhost python3.9[226282]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890156.6595488-3391-256345598729952/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:18 localhost python3.9[226390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:29:19 localhost python3.9[226476]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890157.6823964-3391-198383947404043/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55104 DF PROTO=TCP SPT=33574 DPT=9100 SEQ=166827041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759036370000000001030307) 
Nov 23 04:29:20 localhost python3.9[226586]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:29:21 localhost python3.9[226696]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:29:22 localhost python3.9[226806]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:29:22 localhost python3.9[226918]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:29:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55106 DF PROTO=TCP SPT=33574 DPT=9100 SEQ=166827041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759042590000000001030307) 
Nov 23 04:29:23 localhost python3.9[227026]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:29:24 localhost python3.9[227136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:29:24 localhost python3.9[227222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890163.966048-3766-38734008589644/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3893 DF PROTO=TCP SPT=47106 DPT=9105 SEQ=3638276707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75904A990000000001030307) 
Nov 23 04:29:25 localhost python3.9[227330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:29:26 localhost python3.9[227416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890165.1848505-3812-2928835018387/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:29:27 localhost python3.9[227526]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 23 04:29:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:29:27 localhost systemd[1]: tmp-crun.HGZdKw.mount: Deactivated successfully.
Nov 23 04:29:27 localhost podman[227527]: 2025-11-23 09:29:27.187983779 +0000 UTC m=+0.093172297 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 04:29:27 localhost podman[227527]: 2025-11-23 09:29:27.202906594 +0000 UTC m=+0.108095102 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 23 04:29:27 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:29:27 localhost python3.9[227657]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:29:28 localhost python3[227767]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:29:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3894 DF PROTO=TCP SPT=47106 DPT=9105 SEQ=3638276707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75905A590000000001030307) 
Nov 23 04:29:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55108 DF PROTO=TCP SPT=33574 DPT=9100 SEQ=166827041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759072D90000000001030307) 
Nov 23 04:29:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3895 DF PROTO=TCP SPT=47106 DPT=9105 SEQ=3638276707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75907ADA0000000001030307) 
Nov 23 04:29:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38767 DF PROTO=TCP SPT=43440 DPT=9102 SEQ=2830916572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75907ED90000000001030307) 
Nov 23 04:29:39 localhost podman[227781]: 2025-11-23 09:29:28.907111258 +0000 UTC m=+0.031142555 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 23 04:29:39 localhost podman[227842]: 
Nov 23 04:29:39 localhost podman[227842]: 2025-11-23 09:29:39.904255534 +0000 UTC m=+0.091297415 container create 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 04:29:39 localhost podman[227842]: 2025-11-23 09:29:39.862910704 +0000 UTC m=+0.049952585 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 23 04:29:39 localhost python3[227767]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 23 04:29:40 localhost python3.9[227987]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:29:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42772 DF PROTO=TCP SPT=38720 DPT=9882 SEQ=3222893815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759088CD0000000001030307) 
Nov 23 04:29:42 localhost python3.9[228099]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 23 04:29:42 localhost python3.9[228209]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:29:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34198 DF PROTO=TCP SPT=37508 DPT=9101 SEQ=3028844965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759091190000000001030307) 
Nov 23 04:29:43 localhost python3[228319]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:29:44 localhost python3[228319]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66",#012          "Digest": "sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-11-21T06:33:31.011385583Z",#012          "Config": {#012               "User": "nova",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251118",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012        
  "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 1211770748,#012          "VirtualSize": 1211770748,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012                    "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012                    "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012                    "sha256:b9b598b1eb0c08906fe1bc9a64fc0e72719a6197d83669d2eb4309e69a00aa62",#012                    "sha256:33e3811ab7487b27336fdf94252d5a875b17efb438cbc4ffc943f851ad3eceb6"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251118",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 
Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "nova",#012          "History": [#012               {#012                    "created": "2025-11-18T01:56:49.795434035Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:49.795512415Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:52.547242013Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947310748Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947327778Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": 
"2025-11-21T06:10:01.947358359Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947372589Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94738527Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94739397Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:02.324930938Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:36.349393468Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main 
skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012 
Nov 23 04:29:44 localhost podman[228408]: 2025-11-23 09:29:44.214377851 +0000 UTC m=+0.092964831 container remove 6ddc3faadd652e651a72b8a897103a9f26e1c79c9f6a70402ddabc666562d199 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '83ab5b37680071f0941108e43c518cc1-54a97af4633bfad00758ecf55e783ce2'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step5)
Nov 23 04:29:44 localhost python3[228319]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute
Nov 23 04:29:44 localhost podman[228421]: 
Nov 23 04:29:44 localhost podman[228421]: 2025-11-23 09:29:44.320094948 +0000 UTC m=+0.087402361 container create 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:29:44 localhost podman[228421]: 2025-11-23 09:29:44.280120004 +0000 UTC m=+0.047427447 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 23 04:29:44 localhost python3[228319]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 23 04:29:45 localhost python3.9[228600]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:29:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:29:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:29:45 localhost podman[228663]: 2025-11-23 09:29:45.581444176 +0000 UTC m=+0.085805358 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 04:29:45 localhost podman[228660]: 2025-11-23 09:29:45.629414346 +0000 UTC m=+0.135754841 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 23 04:29:45 localhost podman[228660]: 2025-11-23 09:29:45.637144276 +0000 UTC m=+0.143484711 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:29:45 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:29:45 localhost podman[228663]: 2025-11-23 09:29:45.652870033 +0000 UTC m=+0.157231145 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 04:29:45 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:29:45 localhost python3.9[228775]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:29:46 localhost python3.9[228884]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890186.3975995-4086-110313907064510/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:29:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34199 DF PROTO=TCP SPT=37508 DPT=9101 SEQ=3028844965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590A0D90000000001030307) 
Nov 23 04:29:47 localhost python3.9[228939]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:29:47 localhost systemd[1]: Reloading.
Nov 23 04:29:47 localhost systemd-rc-local-generator[228962]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:29:47 localhost systemd-sysv-generator[228967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:48 localhost python3.9[229030]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:29:49 localhost systemd[1]: Reloading.
Nov 23 04:29:49 localhost systemd-sysv-generator[229061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:29:49 localhost systemd-rc-local-generator[229056]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:29:49 localhost systemd[1]: Starting nova_compute container...
Nov 23 04:29:49 localhost systemd[1]: tmp-crun.t0PWhv.mount: Deactivated successfully.
Nov 23 04:29:49 localhost systemd[1]: Started libcrun container.
Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 04:29:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 04:29:49 localhost podman[229071]: 2025-11-23 09:29:49.491698563 +0000 UTC m=+0.135371351 container init 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:29:49 localhost podman[229071]: 2025-11-23 09:29:49.504272463 +0000 UTC m=+0.147945241 container start 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 23 04:29:49 localhost podman[229071]: nova_compute
Nov 23 04:29:49 localhost nova_compute[229085]: + sudo -E kolla_set_configs
Nov 23 04:29:49 localhost systemd[1]: Started nova_compute container.
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Validating config file
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying service configuration files
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Deleting /etc/ceph
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Creating directory /etc/ceph
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Writing out command to execute
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:29:49 localhost nova_compute[229085]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 04:29:49 localhost nova_compute[229085]: ++ cat /run_command
Nov 23 04:29:49 localhost nova_compute[229085]: + CMD=nova-compute
Nov 23 04:29:49 localhost nova_compute[229085]: + ARGS=
Nov 23 04:29:49 localhost nova_compute[229085]: + sudo kolla_copy_cacerts
Nov 23 04:29:49 localhost nova_compute[229085]: + [[ ! -n '' ]]
Nov 23 04:29:49 localhost nova_compute[229085]: + . kolla_extend_start
Nov 23 04:29:49 localhost nova_compute[229085]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 04:29:49 localhost nova_compute[229085]: Running command: 'nova-compute'
Nov 23 04:29:49 localhost nova_compute[229085]: + umask 0022
Nov 23 04:29:49 localhost nova_compute[229085]: + exec nova-compute
Nov 23 04:29:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10169 DF PROTO=TCP SPT=49052 DPT=9100 SEQ=1839795987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590AB670000000001030307) 
Nov 23 04:29:50 localhost systemd[1]: tmp-crun.9FsDjt.mount: Deactivated successfully.
Nov 23 04:29:50 localhost python3.9[229205]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:29:51 localhost nova_compute[229085]: 2025-11-23 09:29:51.341 229089 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:29:51 localhost nova_compute[229085]: 2025-11-23 09:29:51.341 229089 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:29:51 localhost nova_compute[229085]: 2025-11-23 09:29:51.342 229089 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:29:51 localhost nova_compute[229085]: 2025-11-23 09:29:51.342 229089 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 23 04:29:51 localhost nova_compute[229085]: 2025-11-23 09:29:51.460 229089 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:29:51 localhost nova_compute[229085]: 2025-11-23 09:29:51.480 229089 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:29:51 localhost nova_compute[229085]: 2025-11-23 09:29:51.480 229089 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 23 04:29:51 localhost python3.9[229315]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:29:51 localhost nova_compute[229085]: 2025-11-23 09:29:51.885 229089 INFO nova.virt.driver [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.008 229089 INFO nova.compute.provider_config [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.075 229089 WARNING nova.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.076 229089 DEBUG oslo_concurrency.lockutils [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.077 229089 DEBUG oslo_concurrency.lockutils [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.078 229089 DEBUG oslo_concurrency.lockutils [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.079 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.079 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.079 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.079 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.080 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.080 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.080 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.081 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.081 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.081 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.081 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.082 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.082 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.082 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.083 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.083 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.083 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.084 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.084 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.084 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] console_host                   = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.085 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.085 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.085 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.086 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.086 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.086 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.087 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.087 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.087 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.088 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.088 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.088 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.089 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.089 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.089 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.089 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.090 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.090 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.090 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.091 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.091 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.091 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.092 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.092 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.093 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.093 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.093 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.094 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.094 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.094 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.094 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.095 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.095 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.095 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.096 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.096 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.096 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.097 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.097 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.097 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.097 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.098 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.098 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.098 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.099 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.099 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.099 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.099 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.100 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.100 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.100 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.101 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.101 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.101 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.102 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.102 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.102 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.103 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.103 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.103 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.103 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.104 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.104 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.104 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.104 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.105 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.105 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.105 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.105 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.105 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.106 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.106 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.106 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.106 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.106 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.106 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.107 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.107 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.107 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.107 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.107 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.108 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.108 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.108 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.108 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.108 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.108 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.109 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.109 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.109 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.109 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.110 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.110 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.110 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.110 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.110 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.111 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.111 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.111 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.111 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.111 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.111 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.112 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.112 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.112 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.112 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.112 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.112 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.113 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.113 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.113 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.113 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.113 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.114 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.114 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.114 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.114 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.114 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.114 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.115 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.115 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.115 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.115 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.115 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.116 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.116 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.116 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.116 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.116 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.116 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.117 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.117 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.117 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.117 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.117 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.118 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.118 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.118 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.118 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.118 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.119 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.119 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.119 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.119 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.119 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.119 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.120 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.120 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.120 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.120 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.120 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.121 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.121 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.121 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.121 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.121 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.121 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.122 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.122 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.122 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.122 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.122 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.123 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.123 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.123 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.123 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.123 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.124 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.124 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.124 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.124 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.124 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.124 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.125 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.125 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.125 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.125 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.125 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.126 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.126 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.126 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.126 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.126 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.127 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.127 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.127 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.127 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.127 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.127 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.128 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.128 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.128 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.128 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.128 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.129 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.129 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.129 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.129 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.129 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.129 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.130 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.130 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.130 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.130 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.130 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.131 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.131 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.131 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.131 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.131 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.131 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.132 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.132 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.132 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.132 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.132 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.133 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.133 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.133 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.133 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.133 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.134 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.134 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.134 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.134 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.134 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.134 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.135 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.135 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.135 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.135 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.135 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.136 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.136 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.136 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.136 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.136 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.137 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.137 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.137 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.137 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.137 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.137 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.138 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.138 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.138 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.138 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.138 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.139 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.139 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.139 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.139 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.139 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.140 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.140 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.140 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.140 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.140 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.140 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.141 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.141 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.141 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.141 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.141 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.141 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.141 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.142 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.142 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.142 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.142 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.142 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.142 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.142 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.142 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.143 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.143 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.143 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.143 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.143 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.143 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.143 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.143 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.144 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.144 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.144 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.144 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.144 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.144 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.144 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.145 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.145 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.145 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.145 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.145 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.145 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.145 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.145 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.146 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.146 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.146 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.146 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.146 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.146 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.146 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.146 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.147 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.147 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.147 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.147 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.147 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.147 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.147 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.147 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.148 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.148 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.148 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.148 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.148 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.148 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.148 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.149 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.149 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.149 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.149 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.149 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.149 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.149 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.149 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.150 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.150 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.150 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.150 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.150 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.150 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.150 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.151 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.151 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.151 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.151 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.151 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.151 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.151 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.152 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.152 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.152 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.152 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.152 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.152 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.152 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.153 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.153 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.153 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.153 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.153 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.153 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.153 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.153 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.154 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.154 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.154 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.154 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.154 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.154 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.154 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.154 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.155 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.155 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.155 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.155 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.155 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.155 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.155 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.156 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.156 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.156 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.156 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.156 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.156 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.156 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.156 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.157 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.157 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.157 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.157 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.157 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.157 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.157 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.157 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.158 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.158 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.158 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.158 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.158 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.158 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.158 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.158 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.159 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.159 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.159 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.159 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.159 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.159 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.159 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.160 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.160 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.160 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.160 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.160 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.160 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.160 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.160 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.161 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.161 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.161 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.161 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.161 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.161 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.161 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.161 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.162 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.162 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.162 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.162 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.162 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.162 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.162 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.163 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.163 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.163 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.163 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.163 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.163 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.163 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.163 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.164 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.164 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.164 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.164 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.164 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.164 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.164 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.165 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.165 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.165 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.165 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.165 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.165 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.165 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.165 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.166 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.166 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.166 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.166 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.166 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.166 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.166 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.167 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.167 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.167 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.167 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.167 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.167 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.167 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.167 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.168 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.168 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.168 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.168 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.168 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.168 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.168 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.169 229089 WARNING oslo_config.cfg [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 04:29:52 localhost nova_compute[229085]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 04:29:52 localhost nova_compute[229085]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 04:29:52 localhost nova_compute[229085]: and ``live_migration_inbound_addr`` respectively.
Nov 23 04:29:52 localhost nova_compute[229085]: ).  Its value may be silently ignored in the future.#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.169 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.169 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.169 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.169 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.169 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.169 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.170 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.170 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.170 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.170 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.170 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.170 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.170 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.171 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.171 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.171 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.171 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.171 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.171 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rbd_secret_uuid        = 46550e70-79cb-5f55-bf6d-1204b97e083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.171 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.171 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.172 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.172 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.172 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.172 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.172 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.172 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.172 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.173 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.173 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.173 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.173 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.173 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.173 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.173 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.174 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.174 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.174 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.174 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.174 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.174 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.174 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.174 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.175 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.175 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.175 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.175 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.175 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.175 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.175 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.176 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.176 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.176 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.176 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.176 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.176 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.176 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.177 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.177 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.177 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.177 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.177 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.177 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.177 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.177 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.178 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.178 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.178 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.178 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.178 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.178 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.178 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.178 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.179 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.179 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.179 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.179 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.179 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.179 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.179 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.179 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.180 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.180 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.180 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.180 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.180 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.180 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.180 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.181 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.181 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.181 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.181 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.181 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.181 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.181 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.181 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.182 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.182 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.182 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.182 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.182 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.182 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.182 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.183 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.183 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.183 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.183 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.183 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.183 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.183 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.183 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.184 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.184 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.184 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.184 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.184 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.184 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.184 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.184 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.185 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.185 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.185 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.185 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.185 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.185 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.185 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.186 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.186 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.186 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.186 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.186 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.186 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.186 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.187 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.187 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.187 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.187 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.187 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.187 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.187 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.188 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.188 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.188 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.188 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.188 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.188 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.188 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.189 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.189 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.189 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.189 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.189 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.189 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.189 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.189 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.190 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.190 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.190 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.190 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.190 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.190 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.190 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.191 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.191 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.191 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.191 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.191 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.191 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.191 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.192 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.192 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.192 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.192 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.192 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.192 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.192 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.192 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.193 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.193 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.193 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.193 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.193 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.193 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.194 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.194 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.194 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.194 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.194 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.194 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.194 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.194 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.195 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.195 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.195 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.195 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.195 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.195 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.195 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.196 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.196 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.196 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.196 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.196 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.196 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.196 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.197 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.197 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.197 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.197 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.197 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.197 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.197 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.197 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.198 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.198 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.198 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.198 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.198 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.198 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.198 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.198 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.199 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.199 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.199 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.199 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.199 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.199 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.199 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.200 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.200 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.200 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.200 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.200 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.200 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.200 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.200 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.201 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.201 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.201 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.201 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.201 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.201 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.201 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.201 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.202 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.202 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.202 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.202 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.202 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.202 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.203 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.203 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.203 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.203 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.203 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.203 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.203 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.203 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.204 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.204 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.204 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.204 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.204 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.204 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.204 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.205 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.205 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.205 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.205 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.205 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.205 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.205 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.205 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.206 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.206 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.206 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.206 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.206 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.206 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.206 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.206 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.207 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.207 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.207 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.207 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.207 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.207 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.207 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.208 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.208 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.208 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.208 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.208 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.208 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.208 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.208 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.209 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.209 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.209 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.209 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.209 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.209 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.209 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.210 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.210 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.210 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.210 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.210 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.210 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.210 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.210 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.211 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.211 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.211 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.211 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.211 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.211 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.211 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.212 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.212 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.212 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.212 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.212 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.212 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.212 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.212 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.213 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.213 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.213 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.213 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.213 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.213 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.213 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.214 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.214 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.214 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.214 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.214 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.214 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.214 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.215 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.215 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.215 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.215 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.215 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.215 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.215 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.215 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.216 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.216 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.216 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.216 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.216 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.216 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.216 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.217 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.217 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.217 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.217 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.217 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.217 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.217 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.217 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.218 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.218 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.218 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.218 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.218 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.218 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.218 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.218 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.219 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.219 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.219 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.219 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.219 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.219 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.219 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.219 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.220 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.220 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.220 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.220 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.220 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.220 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.220 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.221 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.221 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.221 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.221 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.221 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.221 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.221 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.222 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.222 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.222 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.222 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.222 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.222 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.222 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.223 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.223 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.223 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.223 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.223 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.223 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.223 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.223 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.224 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.224 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.224 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.224 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.224 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.224 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.224 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.224 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.225 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.225 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.225 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.225 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.225 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.225 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.225 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.225 229089 DEBUG oslo_service.service [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.226 229089 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.235 229089 INFO nova.virt.node [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Determined node identity 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from /var/lib/nova/compute_id#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.236 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.236 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.237 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.237 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 23 04:29:52 localhost systemd[1]: Started libvirt QEMU daemon.
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.304 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f67b255bfa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.307 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f67b255bfa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.308 229089 INFO nova.virt.libvirt.driver [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 23 04:29:52 localhost nova_compute[229085]: 2025-11-23 09:29:52.320 229089 DEBUG nova.virt.libvirt.volume.mount [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 23 04:29:52 localhost python3.9[229426]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:29:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10171 DF PROTO=TCP SPT=49052 DPT=9100 SEQ=1839795987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590B75A0000000001030307) 
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.246 229089 INFO nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 04:29:53 localhost nova_compute[229085]: 
Nov 23 04:29:53 localhost nova_compute[229085]:  <host>
Nov 23 04:29:53 localhost nova_compute[229085]:    <uuid>94eff25b-7070-4dc8-8cfe-491426a98db3</uuid>
Nov 23 04:29:53 localhost nova_compute[229085]:    <cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:      <arch>x86_64</arch>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model>EPYC-Rome-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <vendor>AMD</vendor>
Nov 23 04:29:53 localhost nova_compute[229085]:      <microcode version='16777317'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <signature family='23' model='49' stepping='0'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='x2apic'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='tsc-deadline'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='osxsave'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='hypervisor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='tsc_adjust'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='spec-ctrl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='stibp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='arch-capabilities'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='cmp_legacy'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='topoext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='virt-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='lbrv'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='tsc-scale'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='vmcb-clean'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='pause-filter'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='pfthreshold'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='svme-addr-chk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='rdctl-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='skip-l1dfl-vmentry'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='mds-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature name='pschange-mc-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <pages unit='KiB' size='4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <pages unit='KiB' size='2048'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <pages unit='KiB' size='1048576'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:    <power_management>
Nov 23 04:29:53 localhost nova_compute[229085]:      <suspend_mem/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <suspend_disk/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <suspend_hybrid/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </power_management>
Nov 23 04:29:53 localhost nova_compute[229085]:    <iommu support='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <migration_features>
Nov 23 04:29:53 localhost nova_compute[229085]:      <live/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <uri_transports>
Nov 23 04:29:53 localhost nova_compute[229085]:        <uri_transport>tcp</uri_transport>
Nov 23 04:29:53 localhost nova_compute[229085]:        <uri_transport>rdma</uri_transport>
Nov 23 04:29:53 localhost nova_compute[229085]:      </uri_transports>
Nov 23 04:29:53 localhost nova_compute[229085]:    </migration_features>
Nov 23 04:29:53 localhost nova_compute[229085]:    <topology>
Nov 23 04:29:53 localhost nova_compute[229085]:      <cells num='1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <cell id='0'>
Nov 23 04:29:53 localhost nova_compute[229085]:          <memory unit='KiB'>16116604</memory>
Nov 23 04:29:53 localhost nova_compute[229085]:          <pages unit='KiB' size='4'>4029151</pages>
Nov 23 04:29:53 localhost nova_compute[229085]:          <pages unit='KiB' size='2048'>0</pages>
Nov 23 04:29:53 localhost nova_compute[229085]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 23 04:29:53 localhost nova_compute[229085]:          <distances>
Nov 23 04:29:53 localhost nova_compute[229085]:            <sibling id='0' value='10'/>
Nov 23 04:29:53 localhost nova_compute[229085]:          </distances>
Nov 23 04:29:53 localhost nova_compute[229085]:          <cpus num='8'>
Nov 23 04:29:53 localhost nova_compute[229085]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 04:29:53 localhost nova_compute[229085]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 04:29:53 localhost nova_compute[229085]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 04:29:53 localhost nova_compute[229085]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 04:29:53 localhost nova_compute[229085]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 04:29:53 localhost nova_compute[229085]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 04:29:53 localhost nova_compute[229085]:          </cpus>
Nov 23 04:29:53 localhost nova_compute[229085]:        </cell>
Nov 23 04:29:53 localhost nova_compute[229085]:      </cells>
Nov 23 04:29:53 localhost nova_compute[229085]:    </topology>
Nov 23 04:29:53 localhost nova_compute[229085]:    <cache>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </cache>
Nov 23 04:29:53 localhost nova_compute[229085]:    <secmodel>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model>selinux</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <doi>0</doi>
Nov 23 04:29:53 localhost nova_compute[229085]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 04:29:53 localhost nova_compute[229085]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 04:29:53 localhost nova_compute[229085]:    </secmodel>
Nov 23 04:29:53 localhost nova_compute[229085]:    <secmodel>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model>dac</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <doi>0</doi>
Nov 23 04:29:53 localhost nova_compute[229085]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 04:29:53 localhost nova_compute[229085]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 04:29:53 localhost nova_compute[229085]:    </secmodel>
Nov 23 04:29:53 localhost nova_compute[229085]:  </host>
Nov 23 04:29:53 localhost nova_compute[229085]: 
Nov 23 04:29:53 localhost nova_compute[229085]:  <guest>
Nov 23 04:29:53 localhost nova_compute[229085]:    <os_type>hvm</os_type>
Nov 23 04:29:53 localhost nova_compute[229085]:    <arch name='i686'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <wordsize>32</wordsize>
Nov 23 04:29:53 localhost nova_compute[229085]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <domain type='qemu'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <domain type='kvm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </arch>
Nov 23 04:29:53 localhost nova_compute[229085]:    <features>
Nov 23 04:29:53 localhost nova_compute[229085]:      <pae/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <nonpae/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <acpi default='on' toggle='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <apic default='on' toggle='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <cpuselection/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <deviceboot/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <disksnapshot default='on' toggle='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <externalSnapshot/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </features>
Nov 23 04:29:53 localhost nova_compute[229085]:  </guest>
Nov 23 04:29:53 localhost nova_compute[229085]: 
Nov 23 04:29:53 localhost nova_compute[229085]:  <guest>
Nov 23 04:29:53 localhost nova_compute[229085]:    <os_type>hvm</os_type>
Nov 23 04:29:53 localhost nova_compute[229085]:    <arch name='x86_64'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <wordsize>64</wordsize>
Nov 23 04:29:53 localhost nova_compute[229085]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:      <domain type='qemu'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <domain type='kvm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </arch>
Nov 23 04:29:53 localhost nova_compute[229085]:    <features>
Nov 23 04:29:53 localhost nova_compute[229085]:      <acpi default='on' toggle='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <apic default='on' toggle='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <cpuselection/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <deviceboot/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <disksnapshot default='on' toggle='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <externalSnapshot/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </features>
Nov 23 04:29:53 localhost nova_compute[229085]:  </guest>
Nov 23 04:29:53 localhost nova_compute[229085]: 
Nov 23 04:29:53 localhost nova_compute[229085]: </capabilities>
Nov 23 04:29:53 localhost nova_compute[229085]: #033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.256 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.275 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 04:29:53 localhost nova_compute[229085]: <domainCapabilities>
Nov 23 04:29:53 localhost nova_compute[229085]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:29:53 localhost nova_compute[229085]:  <domain>kvm</domain>
Nov 23 04:29:53 localhost nova_compute[229085]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:  <arch>i686</arch>
Nov 23 04:29:53 localhost nova_compute[229085]:  <vcpu max='1024'/>
Nov 23 04:29:53 localhost nova_compute[229085]:  <iothreads supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:  <os supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:    <enum name='firmware'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <loader supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>rom</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pflash</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='readonly'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>yes</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>no</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='secure'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>no</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </loader>
Nov 23 04:29:53 localhost nova_compute[229085]:  </os>
Nov 23 04:29:53 localhost nova_compute[229085]:  <cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>on</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>off</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='maximum' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='maximumMigratable'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>on</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>off</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='host-model' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <vendor>AMD</vendor>
Nov 23 04:29:53 localhost nova_compute[229085]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='x2apic'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='stibp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='succor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='lbrv'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='custom' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Dhyana-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Genoa'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='auto-ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='auto-ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-128'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-256'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-512'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='KnightsMill'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4fmaps'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4vnniw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512er'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512pf'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='KnightsMill-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4fmaps'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4vnniw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512er'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512pf'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tbm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tbm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SierraForest'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ne-convert'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cmpccxadd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SierraForest-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ne-convert'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cmpccxadd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='athlon'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='athlon-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='core2duo'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='core2duo-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='coreduo'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='coreduo-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='n270'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='n270-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='phenom'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='phenom-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:  </cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:  <memoryBacking supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:    <enum name='sourceType'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>file</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>anonymous</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>memfd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:    </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:  </memoryBacking>
Nov 23 04:29:53 localhost nova_compute[229085]:  <devices>
Nov 23 04:29:53 localhost nova_compute[229085]:    <disk supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='diskDevice'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>disk</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>cdrom</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>floppy</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>lun</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='bus'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>fdc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>scsi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>sata</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-non-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </disk>
Nov 23 04:29:53 localhost nova_compute[229085]:    <graphics supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vnc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>egl-headless</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dbus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </graphics>
Nov 23 04:29:53 localhost nova_compute[229085]:    <video supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='modelType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vga</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>cirrus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>none</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>bochs</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ramfb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </video>
Nov 23 04:29:53 localhost nova_compute[229085]:    <hostdev supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='mode'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>subsystem</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='startupPolicy'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>default</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>mandatory</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>requisite</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>optional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='subsysType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pci</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>scsi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='capsType'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='pciBackend'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </hostdev>
Nov 23 04:29:53 localhost nova_compute[229085]:    <rng supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-non-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>random</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>egd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>builtin</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </rng>
Nov 23 04:29:53 localhost nova_compute[229085]:    <filesystem supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='driverType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>path</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>handle</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtiofs</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </filesystem>
Nov 23 04:29:53 localhost nova_compute[229085]:    <tpm supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tpm-tis</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tpm-crb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>emulator</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>external</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendVersion'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>2.0</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </tpm>
Nov 23 04:29:53 localhost nova_compute[229085]:    <redirdev supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='bus'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </redirdev>
Nov 23 04:29:53 localhost nova_compute[229085]:    <channel supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pty</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>unix</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </channel>
Nov 23 04:29:53 localhost nova_compute[229085]:    <crypto supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>qemu</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>builtin</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </crypto>
Nov 23 04:29:53 localhost nova_compute[229085]:    <interface supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>default</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>passt</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </interface>
Nov 23 04:29:53 localhost nova_compute[229085]:    <panic supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>isa</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>hyperv</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </panic>
Nov 23 04:29:53 localhost nova_compute[229085]:    <console supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>null</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pty</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dev</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>file</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pipe</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>stdio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>udp</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tcp</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>unix</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>qemu-vdagent</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dbus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </console>
Nov 23 04:29:53 localhost nova_compute[229085]:  </devices>
Nov 23 04:29:53 localhost nova_compute[229085]:  <features>
Nov 23 04:29:53 localhost nova_compute[229085]:    <gic supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <vmcoreinfo supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <genid supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <backingStoreInput supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <backup supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <async-teardown supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <ps2 supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <sev supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <sgx supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <hyperv supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='features'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>relaxed</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vapic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>spinlocks</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vpindex</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>runtime</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>synic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>stimer</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>reset</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vendor_id</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>frequencies</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>reenlightenment</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tlbflush</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ipi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>avic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>emsr_bitmap</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>xmm_input</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <defaults>
Nov 23 04:29:53 localhost nova_compute[229085]:        <spinlocks>4095</spinlocks>
Nov 23 04:29:53 localhost nova_compute[229085]:        <stimer_direct>on</stimer_direct>
Nov 23 04:29:53 localhost nova_compute[229085]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:29:53 localhost nova_compute[229085]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:29:53 localhost nova_compute[229085]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:29:53 localhost nova_compute[229085]:      </defaults>
Nov 23 04:29:53 localhost nova_compute[229085]:    </hyperv>
Nov 23 04:29:53 localhost nova_compute[229085]:    <launchSecurity supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='sectype'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tdx</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </launchSecurity>
Nov 23 04:29:53 localhost nova_compute[229085]:  </features>
Nov 23 04:29:53 localhost nova_compute[229085]: </domainCapabilities>
Nov 23 04:29:53 localhost nova_compute[229085]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.284 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 04:29:53 localhost nova_compute[229085]: <domainCapabilities>
Nov 23 04:29:53 localhost nova_compute[229085]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:29:53 localhost nova_compute[229085]:  <domain>kvm</domain>
Nov 23 04:29:53 localhost nova_compute[229085]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:  <arch>i686</arch>
Nov 23 04:29:53 localhost nova_compute[229085]:  <vcpu max='240'/>
Nov 23 04:29:53 localhost nova_compute[229085]:  <iothreads supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:  <os supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:    <enum name='firmware'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <loader supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>rom</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pflash</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='readonly'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>yes</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>no</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='secure'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>no</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </loader>
Nov 23 04:29:53 localhost nova_compute[229085]:  </os>
Nov 23 04:29:53 localhost nova_compute[229085]:  <cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>on</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>off</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='maximum' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='maximumMigratable'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>on</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>off</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='host-model' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <vendor>AMD</vendor>
Nov 23 04:29:53 localhost nova_compute[229085]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='x2apic'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='stibp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='succor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='lbrv'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='custom' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Dhyana-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Genoa'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='auto-ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='auto-ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-128'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-256'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-512'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='KnightsMill'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4fmaps'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4vnniw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512er'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512pf'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='KnightsMill-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4fmaps'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4vnniw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512er'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512pf'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tbm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tbm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SierraForest'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ne-convert'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cmpccxadd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SierraForest-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ne-convert'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cmpccxadd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='athlon'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='athlon-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='core2duo'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='core2duo-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='coreduo'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='coreduo-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='n270'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='n270-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='phenom'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='phenom-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:  </cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:  <memoryBacking supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:    <enum name='sourceType'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>file</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>anonymous</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>memfd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:    </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:  </memoryBacking>
Nov 23 04:29:53 localhost nova_compute[229085]:  <devices>
Nov 23 04:29:53 localhost nova_compute[229085]:    <disk supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='diskDevice'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>disk</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>cdrom</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>floppy</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>lun</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='bus'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ide</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>fdc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>scsi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>sata</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-non-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </disk>
Nov 23 04:29:53 localhost nova_compute[229085]:    <graphics supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vnc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>egl-headless</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dbus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </graphics>
Nov 23 04:29:53 localhost nova_compute[229085]:    <video supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='modelType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vga</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>cirrus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>none</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>bochs</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ramfb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </video>
Nov 23 04:29:53 localhost nova_compute[229085]:    <hostdev supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='mode'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>subsystem</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='startupPolicy'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>default</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>mandatory</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>requisite</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>optional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='subsysType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pci</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>scsi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='capsType'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='pciBackend'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </hostdev>
Nov 23 04:29:53 localhost nova_compute[229085]:    <rng supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-non-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>random</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>egd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>builtin</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </rng>
Nov 23 04:29:53 localhost nova_compute[229085]:    <filesystem supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='driverType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>path</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>handle</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtiofs</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </filesystem>
Nov 23 04:29:53 localhost nova_compute[229085]:    <tpm supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tpm-tis</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tpm-crb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>emulator</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>external</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendVersion'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>2.0</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </tpm>
Nov 23 04:29:53 localhost nova_compute[229085]:    <redirdev supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='bus'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </redirdev>
Nov 23 04:29:53 localhost nova_compute[229085]:    <channel supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pty</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>unix</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </channel>
Nov 23 04:29:53 localhost nova_compute[229085]:    <crypto supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>qemu</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>builtin</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </crypto>
Nov 23 04:29:53 localhost nova_compute[229085]:    <interface supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>default</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>passt</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </interface>
Nov 23 04:29:53 localhost nova_compute[229085]:    <panic supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>isa</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>hyperv</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </panic>
Nov 23 04:29:53 localhost nova_compute[229085]:    <console supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>null</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pty</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dev</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>file</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pipe</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>stdio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>udp</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tcp</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>unix</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>qemu-vdagent</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dbus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </console>
Nov 23 04:29:53 localhost nova_compute[229085]:  </devices>
Nov 23 04:29:53 localhost nova_compute[229085]:  <features>
Nov 23 04:29:53 localhost nova_compute[229085]:    <gic supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <vmcoreinfo supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <genid supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <backingStoreInput supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <backup supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <async-teardown supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <ps2 supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <sev supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <sgx supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <hyperv supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='features'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>relaxed</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vapic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>spinlocks</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vpindex</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>runtime</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>synic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>stimer</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>reset</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vendor_id</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>frequencies</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>reenlightenment</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tlbflush</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ipi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>avic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>emsr_bitmap</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>xmm_input</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <defaults>
Nov 23 04:29:53 localhost nova_compute[229085]:        <spinlocks>4095</spinlocks>
Nov 23 04:29:53 localhost nova_compute[229085]:        <stimer_direct>on</stimer_direct>
Nov 23 04:29:53 localhost nova_compute[229085]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:29:53 localhost nova_compute[229085]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:29:53 localhost nova_compute[229085]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:29:53 localhost nova_compute[229085]:      </defaults>
Nov 23 04:29:53 localhost nova_compute[229085]:    </hyperv>
Nov 23 04:29:53 localhost nova_compute[229085]:    <launchSecurity supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='sectype'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tdx</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </launchSecurity>
Nov 23 04:29:53 localhost nova_compute[229085]:  </features>
Nov 23 04:29:53 localhost nova_compute[229085]: </domainCapabilities>
Nov 23 04:29:53 localhost nova_compute[229085]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.330 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.336 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 04:29:53 localhost nova_compute[229085]: <domainCapabilities>
Nov 23 04:29:53 localhost nova_compute[229085]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:29:53 localhost nova_compute[229085]:  <domain>kvm</domain>
Nov 23 04:29:53 localhost nova_compute[229085]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:  <arch>x86_64</arch>
Nov 23 04:29:53 localhost nova_compute[229085]:  <vcpu max='1024'/>
Nov 23 04:29:53 localhost nova_compute[229085]:  <iothreads supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:  <os supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:    <enum name='firmware'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>efi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:    </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    <loader supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>rom</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pflash</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='readonly'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>yes</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>no</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='secure'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>yes</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>no</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </loader>
Nov 23 04:29:53 localhost nova_compute[229085]:  </os>
Nov 23 04:29:53 localhost nova_compute[229085]:  <cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>on</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>off</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='maximum' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='maximumMigratable'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>on</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>off</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='host-model' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <vendor>AMD</vendor>
Nov 23 04:29:53 localhost nova_compute[229085]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='x2apic'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='stibp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='succor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='lbrv'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='custom' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Dhyana-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Genoa'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='auto-ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='auto-ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-128'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-256'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-512'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='KnightsMill'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4fmaps'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4vnniw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512er'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512pf'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='KnightsMill-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4fmaps'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4vnniw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512er'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512pf'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tbm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tbm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SierraForest'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ne-convert'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cmpccxadd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SierraForest-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ne-convert'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cmpccxadd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='athlon'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='athlon-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='core2duo'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='core2duo-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='coreduo'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='coreduo-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='n270'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='n270-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='phenom'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='phenom-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:  </cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:  <memoryBacking supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:    <enum name='sourceType'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>file</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>anonymous</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>memfd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:    </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:  </memoryBacking>
Nov 23 04:29:53 localhost nova_compute[229085]:  <devices>
Nov 23 04:29:53 localhost nova_compute[229085]:    <disk supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='diskDevice'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>disk</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>cdrom</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>floppy</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>lun</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='bus'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>fdc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>scsi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>sata</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-non-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </disk>
Nov 23 04:29:53 localhost nova_compute[229085]:    <graphics supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vnc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>egl-headless</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dbus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </graphics>
Nov 23 04:29:53 localhost nova_compute[229085]:    <video supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='modelType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vga</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>cirrus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>none</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>bochs</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ramfb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </video>
Nov 23 04:29:53 localhost nova_compute[229085]:    <hostdev supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='mode'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>subsystem</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='startupPolicy'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>default</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>mandatory</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>requisite</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>optional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='subsysType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pci</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>scsi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='capsType'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='pciBackend'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </hostdev>
Nov 23 04:29:53 localhost nova_compute[229085]:    <rng supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-non-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>random</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>egd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>builtin</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </rng>
Nov 23 04:29:53 localhost nova_compute[229085]:    <filesystem supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='driverType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>path</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>handle</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtiofs</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </filesystem>
Nov 23 04:29:53 localhost nova_compute[229085]:    <tpm supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tpm-tis</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tpm-crb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>emulator</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>external</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendVersion'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>2.0</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </tpm>
Nov 23 04:29:53 localhost nova_compute[229085]:    <redirdev supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='bus'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </redirdev>
Nov 23 04:29:53 localhost nova_compute[229085]:    <channel supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pty</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>unix</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </channel>
Nov 23 04:29:53 localhost nova_compute[229085]:    <crypto supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>qemu</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>builtin</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </crypto>
Nov 23 04:29:53 localhost nova_compute[229085]:    <interface supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>default</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>passt</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </interface>
Nov 23 04:29:53 localhost nova_compute[229085]:    <panic supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>isa</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>hyperv</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </panic>
Nov 23 04:29:53 localhost nova_compute[229085]:    <console supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>null</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pty</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dev</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>file</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pipe</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>stdio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>udp</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tcp</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>unix</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>qemu-vdagent</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dbus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </console>
Nov 23 04:29:53 localhost nova_compute[229085]:  </devices>
Nov 23 04:29:53 localhost nova_compute[229085]:  <features>
Nov 23 04:29:53 localhost nova_compute[229085]:    <gic supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <vmcoreinfo supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <genid supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <backingStoreInput supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <backup supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <async-teardown supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <ps2 supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <sev supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <sgx supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <hyperv supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='features'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>relaxed</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vapic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>spinlocks</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vpindex</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>runtime</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>synic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>stimer</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>reset</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vendor_id</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>frequencies</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>reenlightenment</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tlbflush</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ipi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>avic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>emsr_bitmap</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>xmm_input</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <defaults>
Nov 23 04:29:53 localhost nova_compute[229085]:        <spinlocks>4095</spinlocks>
Nov 23 04:29:53 localhost nova_compute[229085]:        <stimer_direct>on</stimer_direct>
Nov 23 04:29:53 localhost nova_compute[229085]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:29:53 localhost nova_compute[229085]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:29:53 localhost nova_compute[229085]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:29:53 localhost nova_compute[229085]:      </defaults>
Nov 23 04:29:53 localhost nova_compute[229085]:    </hyperv>
Nov 23 04:29:53 localhost nova_compute[229085]:    <launchSecurity supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='sectype'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tdx</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </launchSecurity>
Nov 23 04:29:53 localhost nova_compute[229085]:  </features>
Nov 23 04:29:53 localhost nova_compute[229085]: </domainCapabilities>
Nov 23 04:29:53 localhost nova_compute[229085]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.394 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 04:29:53 localhost nova_compute[229085]: <domainCapabilities>
Nov 23 04:29:53 localhost nova_compute[229085]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:29:53 localhost nova_compute[229085]:  <domain>kvm</domain>
Nov 23 04:29:53 localhost nova_compute[229085]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:29:53 localhost nova_compute[229085]:  <arch>x86_64</arch>
Nov 23 04:29:53 localhost nova_compute[229085]:  <vcpu max='240'/>
Nov 23 04:29:53 localhost nova_compute[229085]:  <iothreads supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:  <os supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:    <enum name='firmware'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <loader supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>rom</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pflash</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='readonly'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>yes</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>no</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='secure'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>no</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </loader>
Nov 23 04:29:53 localhost nova_compute[229085]:  </os>
Nov 23 04:29:53 localhost nova_compute[229085]:  <cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>on</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>off</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='maximum' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='maximumMigratable'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>on</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>off</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='host-model' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <vendor>AMD</vendor>
Nov 23 04:29:53 localhost nova_compute[229085]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='x2apic'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='stibp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='succor'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='lbrv'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:    <mode name='custom' supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Broadwell-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Cooperlake-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Denverton-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Dhyana-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Genoa'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='auto-ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='auto-ibrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amd-psfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='no-nested-data-bp'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='null-sel-clr-base'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='stibp-always-on'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='EPYC-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-128'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-256'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx10-512'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='prefetchiti'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Haswell-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='IvyBridge-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='KnightsMill'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4fmaps'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4vnniw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512er'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512pf'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='KnightsMill-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4fmaps'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-4vnniw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512er'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512pf'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tbm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fma4'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tbm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xop'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='amx-tile'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-bf16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-fp16'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bitalg'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vbmi2'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrc'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fzrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='la57'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='taa-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='tsx-ldtrk'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xfd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SierraForest'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ne-convert'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cmpccxadd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='SierraForest-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ifma'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-ne-convert'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx-vnni-int8'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='bus-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cmpccxadd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fbsdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='fsrs'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ibrs-all'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mcdt-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pbrsb-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='psdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='serialize'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vaes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='vpclmulqdq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='hle'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='rtm'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512bw'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512cd'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512dq'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512f'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='avx512vl'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='invpcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pcid'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='pku'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='mpx'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v2'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v3'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='core-capability'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='split-lock-detect'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='Snowridge-v4'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='cldemote'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='erms'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='gfni'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdir64b'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='movdiri'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='xsaves'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='athlon'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='athlon-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='core2duo'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='core2duo-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='coreduo'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='coreduo-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='n270'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='n270-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='ss'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='phenom'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <blockers model='phenom-v1'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnow'/>
Nov 23 04:29:53 localhost nova_compute[229085]:        <feature name='3dnowext'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      </blockers>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:29:53 localhost nova_compute[229085]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:29:53 localhost nova_compute[229085]:    </mode>
Nov 23 04:29:53 localhost nova_compute[229085]:  </cpu>
Nov 23 04:29:53 localhost nova_compute[229085]:  <memoryBacking supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:    <enum name='sourceType'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>file</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>anonymous</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      <value>memfd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:    </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:  </memoryBacking>
Nov 23 04:29:53 localhost nova_compute[229085]:  <devices>
Nov 23 04:29:53 localhost nova_compute[229085]:    <disk supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='diskDevice'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>disk</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>cdrom</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>floppy</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>lun</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='bus'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ide</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>fdc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>scsi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>sata</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-non-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </disk>
Nov 23 04:29:53 localhost nova_compute[229085]:    <graphics supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vnc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>egl-headless</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dbus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </graphics>
Nov 23 04:29:53 localhost nova_compute[229085]:    <video supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='modelType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vga</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>cirrus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>none</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>bochs</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ramfb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </video>
Nov 23 04:29:53 localhost nova_compute[229085]:    <hostdev supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='mode'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>subsystem</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='startupPolicy'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>default</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>mandatory</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>requisite</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>optional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='subsysType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pci</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>scsi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='capsType'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='pciBackend'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    </hostdev>
Nov 23 04:29:53 localhost nova_compute[229085]:    <rng supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtio-non-transitional</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>random</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>egd</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>builtin</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </rng>
Nov 23 04:29:53 localhost nova_compute[229085]:    <filesystem supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='driverType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>path</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>handle</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>virtiofs</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </filesystem>
Nov 23 04:29:53 localhost nova_compute[229085]:    <tpm supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tpm-tis</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tpm-crb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>emulator</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>external</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendVersion'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>2.0</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </tpm>
Nov 23 04:29:53 localhost nova_compute[229085]:    <redirdev supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='bus'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>usb</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </redirdev>
Nov 23 04:29:53 localhost nova_compute[229085]:    <channel supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pty</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>unix</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </channel>
Nov 23 04:29:53 localhost nova_compute[229085]:    <crypto supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'/>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>qemu</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendModel'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>builtin</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </crypto>
Nov 23 04:29:53 localhost nova_compute[229085]:    <interface supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='backendType'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>default</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>passt</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </interface>
Nov 23 04:29:53 localhost nova_compute[229085]:    <panic supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='model'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>isa</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>hyperv</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </panic>
Nov 23 04:29:53 localhost nova_compute[229085]:    <console supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='type'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>null</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vc</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pty</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dev</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>file</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>pipe</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>stdio</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>udp</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tcp</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>unix</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>qemu-vdagent</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>dbus</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </console>
Nov 23 04:29:53 localhost nova_compute[229085]:  </devices>
Nov 23 04:29:53 localhost nova_compute[229085]:  <features>
Nov 23 04:29:53 localhost nova_compute[229085]:    <gic supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <vmcoreinfo supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <genid supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <backingStoreInput supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <backup supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <async-teardown supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <ps2 supported='yes'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <sev supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <sgx supported='no'/>
Nov 23 04:29:53 localhost nova_compute[229085]:    <hyperv supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='features'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>relaxed</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vapic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>spinlocks</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vpindex</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>runtime</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>synic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>stimer</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>reset</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>vendor_id</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>frequencies</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>reenlightenment</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tlbflush</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>ipi</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>avic</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>emsr_bitmap</value>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>xmm_input</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:      <defaults>
Nov 23 04:29:53 localhost nova_compute[229085]:        <spinlocks>4095</spinlocks>
Nov 23 04:29:53 localhost nova_compute[229085]:        <stimer_direct>on</stimer_direct>
Nov 23 04:29:53 localhost nova_compute[229085]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:29:53 localhost nova_compute[229085]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:29:53 localhost nova_compute[229085]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:29:53 localhost nova_compute[229085]:      </defaults>
Nov 23 04:29:53 localhost nova_compute[229085]:    </hyperv>
Nov 23 04:29:53 localhost nova_compute[229085]:    <launchSecurity supported='yes'>
Nov 23 04:29:53 localhost nova_compute[229085]:      <enum name='sectype'>
Nov 23 04:29:53 localhost nova_compute[229085]:        <value>tdx</value>
Nov 23 04:29:53 localhost nova_compute[229085]:      </enum>
Nov 23 04:29:53 localhost nova_compute[229085]:    </launchSecurity>
Nov 23 04:29:53 localhost nova_compute[229085]:  </features>
Nov 23 04:29:53 localhost nova_compute[229085]: </domainCapabilities>
Nov 23 04:29:53 localhost nova_compute[229085]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
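The domainCapabilities document that nova logs above is plain XML and can be inspected offline. A minimal sketch (using a reduced, hypothetical excerpt of the XML shown; on a real host the full document comes from `virsh domcapabilities` or the libvirt API) that extracts the CPU models a guest can actually use, skipping those flagged deprecated:

```python
# Sketch: pull usable, non-deprecated CPU models out of a libvirt
# domainCapabilities XML document like the one logged above.
# The XML below is a reduced, hypothetical excerpt for illustration.
import xml.etree.ElementTree as ET

XML = """\
<domainCapabilities>
  <cpu>
    <mode name='custom' supported='yes'>
      <model usable='yes' vendor='Intel'>Westmere-v1</model>
      <model usable='no' vendor='Intel'>Snowridge-v4</model>
      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
    </mode>
  </cpu>
</domainCapabilities>
"""

def usable_models(xml_text):
    """Return names of CPU models marked usable='yes' and not deprecated."""
    root = ET.fromstring(xml_text)
    return [
        m.text
        for m in root.iterfind("./cpu/mode/model")
        if m.get("usable") == "yes" and m.get("deprecated") != "yes"
    ]

print(usable_models(XML))  # ['Westmere-v1']
```

The same filtering explains the pairs in the log: each `<model usable='no'>` entry is immediately followed by a `<blockers>` element listing the host CPU features whose absence makes that model unusable.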
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.446 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.446 229089 INFO nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Secure Boot support detected#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.448 229089 INFO nova.virt.libvirt.driver [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.449 229089 INFO nova.virt.libvirt.driver [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.460 229089 DEBUG nova.virt.libvirt.driver [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.474 229089 INFO nova.virt.node [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Determined node identity 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from /var/lib/nova/compute_id#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.486 229089 DEBUG nova.compute.manager [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Verified node 1df367d3-e79d-4d54-9b3c-f6af3beffa8b matches my host np0005532586.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Nov 23 04:29:53 localhost nova_compute[229085]: 2025-11-23 09:29:53.503 229089 INFO nova.compute.manager [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.139 229089 INFO nova.service [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Updating service version for nova-compute on np0005532586.localdomain from 57 to 66#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.177 229089 DEBUG oslo_concurrency.lockutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.177 229089 DEBUG oslo_concurrency.lockutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.178 229089 DEBUG oslo_concurrency.lockutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.178 229089 DEBUG nova.compute.resource_tracker [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.179 229089 DEBUG oslo_concurrency.processutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.632 229089 DEBUG oslo_concurrency.processutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:29:54 localhost systemd[1]: Started libvirt nodedev daemon.
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.983 229089 WARNING nova.virt.libvirt.driver [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.984 229089 DEBUG nova.compute.resource_tracker [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=13585MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.985 229089 DEBUG oslo_concurrency.lockutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:29:54 localhost nova_compute[229085]: 2025-11-23 09:29:54.985 229089 DEBUG oslo_concurrency.lockutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.122 229089 DEBUG nova.compute.resource_tracker [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.123 229089 DEBUG nova.compute.resource_tracker [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.179 229089 DEBUG nova.scheduler.client.report [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.197 229089 DEBUG nova.scheduler.client.report [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.197 229089 DEBUG nova.compute.provider_tree [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
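The inventory reported above determines how much the scheduler can place on this node: for each resource class, Placement treats effective capacity as (total - reserved) * allocation_ratio. A sketch of that arithmetic using the exact figures from the log line:

```python
# Sketch of Placement's capacity formula applied to the inventory logged
# above: capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 41, "reserved": 0, "allocation_ratio": 1.0},
}

def effective_capacity(inv):
    """Capacity Placement will schedule against, per resource class."""
    return {
        rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
        for rc, v in inv.items()
    }

print(effective_capacity(inventory))
# {'VCPU': 128.0, 'MEMORY_MB': 15226.0, 'DISK_GB': 41.0}
```

So with 8 physical vCPUs and the default 16.0 overcommit ratio, this host can accept up to 128 vCPUs of allocations, while memory and disk are not overcommitted.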
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.210 229089 DEBUG nova.scheduler.client.report [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.234 229089 DEBUG nova.scheduler.client.report [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_LAN9118,HW_CPU_X86_SSE4A,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.255 229089 DEBUG oslo_concurrency.processutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:29:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35002 DF PROTO=TCP SPT=50046 DPT=9105 SEQ=37278830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590BFD90000000001030307) 
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.704 229089 DEBUG oslo_concurrency.processutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.708 229089 DEBUG nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 23 04:29:55 localhost nova_compute[229085]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.709 229089 INFO nova.virt.libvirt.host [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.710 229089 DEBUG nova.compute.provider_tree [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.710 229089 DEBUG nova.virt.libvirt.driver [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.726 229089 DEBUG nova.scheduler.client.report [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.801 229089 DEBUG nova.compute.provider_tree [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Updating resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.833 229089 DEBUG nova.compute.resource_tracker [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.833 229089 DEBUG oslo_concurrency.lockutils [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.833 229089 DEBUG nova.service [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.857 229089 DEBUG nova.service [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 23 04:29:55 localhost nova_compute[229085]: 2025-11-23 09:29:55.857 229089 DEBUG nova.servicegroup.drivers.db [None req-5e32944f-d6bb-43ea-9401-9305fcd9ceab - - - - - -] DB_Driver: join new ServiceGroup member np0005532586.localdomain to the compute group, service = <Service: host=np0005532586.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 23 04:29:55 localhost sshd[229874]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:29:55 localhost python3.9[229871]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 04:29:56 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 121.3 (404 of 333 items), suggesting rotation.
Nov 23 04:29:56 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 04:29:56 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:29:56 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:29:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:29:58 localhost podman[229936]: 2025-11-23 09:29:58.191422741 +0000 UTC m=+0.087081111 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:29:58 localhost podman[229936]: 2025-11-23 09:29:58.205881574 +0000 UTC m=+0.101539924 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:29:58 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:29:58 localhost python3.9[230028]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:29:58 localhost systemd[1]: Stopping nova_compute container...
Nov 23 04:29:58 localhost systemd[1]: tmp-crun.h0CWNz.mount: Deactivated successfully.
Nov 23 04:29:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35003 DF PROTO=TCP SPT=50046 DPT=9105 SEQ=37278830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590CF990000000001030307) 
Nov 23 04:29:59 localhost sshd[230045]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:29:59 localhost nova_compute[229085]: 2025-11-23 09:29:59.972 229089 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m
Nov 23 04:29:59 localhost nova_compute[229085]: 2025-11-23 09:29:59.975 229089 DEBUG oslo_concurrency.lockutils [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 04:29:59 localhost nova_compute[229085]: 2025-11-23 09:29:59.976 229089 DEBUG oslo_concurrency.lockutils [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 04:29:59 localhost nova_compute[229085]: 2025-11-23 09:29:59.976 229089 DEBUG oslo_concurrency.lockutils [None req-abe6f92c-2d08-4d25-bc58-60f280240d65 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:30:00 localhost systemd[1]: libpod-3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e.scope: Deactivated successfully.
Nov 23 04:30:00 localhost journal[229448]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 23 04:30:00 localhost journal[229448]: hostname: np0005532586.localdomain
Nov 23 04:30:00 localhost journal[229448]: End of file while reading data: Input/output error
Nov 23 04:30:00 localhost systemd[1]: libpod-3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e.scope: Consumed 3.837s CPU time.
Nov 23 04:30:00 localhost podman[230032]: 2025-11-23 09:30:00.332575404 +0000 UTC m=+1.613556899 container died 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2)
Nov 23 04:30:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e-userdata-shm.mount: Deactivated successfully.
Nov 23 04:30:00 localhost podman[230032]: 2025-11-23 09:30:00.377902002 +0000 UTC m=+1.658883497 container cleanup 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 04:30:00 localhost podman[230032]: nova_compute
Nov 23 04:30:00 localhost podman[230070]: error opening file `/run/crun/3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e/status`: No such file or directory
Nov 23 04:30:00 localhost podman[230058]: 2025-11-23 09:30:00.454374365 +0000 UTC m=+0.051035614 container cleanup 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:30:00 localhost podman[230058]: nova_compute
Nov 23 04:30:00 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 23 04:30:00 localhost systemd[1]: Stopped nova_compute container.
Nov 23 04:30:00 localhost systemd[1]: Starting nova_compute container...
Nov 23 04:30:00 localhost systemd[1]: Started libcrun container.
Nov 23 04:30:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 04:30:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 04:30:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 04:30:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 04:30:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 04:30:00 localhost podman[230072]: 2025-11-23 09:30:00.59139656 +0000 UTC m=+0.108119692 container init 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 04:30:00 localhost podman[230072]: 2025-11-23 09:30:00.600952799 +0000 UTC m=+0.117675941 container start 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 04:30:00 localhost podman[230072]: nova_compute
Nov 23 04:30:00 localhost nova_compute[230084]: + sudo -E kolla_set_configs
Nov 23 04:30:00 localhost systemd[1]: Started nova_compute container.
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Validating config file
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying service configuration files
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /etc/ceph
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Creating directory /etc/ceph
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Writing out command to execute
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:30:00 localhost nova_compute[230084]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 04:30:00 localhost nova_compute[230084]: ++ cat /run_command
Nov 23 04:30:00 localhost nova_compute[230084]: + CMD=nova-compute
Nov 23 04:30:00 localhost nova_compute[230084]: + ARGS=
Nov 23 04:30:00 localhost nova_compute[230084]: + sudo kolla_copy_cacerts
Nov 23 04:30:00 localhost nova_compute[230084]: + [[ ! -n '' ]]
Nov 23 04:30:00 localhost nova_compute[230084]: + . kolla_extend_start
Nov 23 04:30:00 localhost nova_compute[230084]: Running command: 'nova-compute'
Nov 23 04:30:00 localhost nova_compute[230084]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 04:30:00 localhost nova_compute[230084]: + umask 0022
Nov 23 04:30:00 localhost nova_compute[230084]: + exec nova-compute
Nov 23 04:30:01 localhost python3.9[230206]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 04:30:01 localhost systemd[1]: Started libpod-conmon-227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec.scope.
Nov 23 04:30:01 localhost systemd[1]: Started libcrun container.
Nov 23 04:30:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 23 04:30:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 04:30:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 23 04:30:01 localhost podman[230232]: 2025-11-23 09:30:01.981934621 +0000 UTC m=+0.099670273 container init 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Nov 23 04:30:01 localhost podman[230232]: 2025-11-23 09:30:01.988724045 +0000 UTC m=+0.106459697 container start 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 04:30:01 localhost python3.9[230206]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Applying nova statedir ownership
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/4143dbbec5b08621aa3c8eb364f8a7d3e97604e18b7ed41c4bab0da11ed561fd
Nov 23 04:30:02 localhost nova_compute_init[230252]: INFO:nova_statedir:Nova statedir ownership complete
Nov 23 04:30:02 localhost systemd[1]: libpod-227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec.scope: Deactivated successfully.
Nov 23 04:30:02 localhost podman[230253]: 2025-11-23 09:30:02.059623417 +0000 UTC m=+0.054933420 container died 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.schema-version=1.0)
Nov 23 04:30:02 localhost podman[230267]: 2025-11-23 09:30:02.128291199 +0000 UTC m=+0.066445852 container cleanup 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251118, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 23 04:30:02 localhost systemd[1]: libpod-conmon-227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec.scope: Deactivated successfully.
Nov 23 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay-83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7-merged.mount: Deactivated successfully.
Nov 23 04:30:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec-userdata-shm.mount: Deactivated successfully.
Nov 23 04:30:02 localhost nova_compute[230084]: 2025-11-23 09:30:02.403 230088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:30:02 localhost nova_compute[230084]: 2025-11-23 09:30:02.403 230088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:30:02 localhost nova_compute[230084]: 2025-11-23 09:30:02.403 230088 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:30:02 localhost nova_compute[230084]: 2025-11-23 09:30:02.404 230088 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 23 04:30:02 localhost nova_compute[230084]: 2025-11-23 09:30:02.516 230088 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:30:02 localhost nova_compute[230084]: 2025-11-23 09:30:02.538 230088 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:30:02 localhost nova_compute[230084]: 2025-11-23 09:30:02.538 230088 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 23 04:30:02 localhost systemd[1]: session-53.scope: Deactivated successfully.
Nov 23 04:30:02 localhost systemd[1]: session-53.scope: Consumed 2min 16.203s CPU time.
Nov 23 04:30:02 localhost systemd-logind[761]: Session 53 logged out. Waiting for processes to exit.
Nov 23 04:30:02 localhost systemd-logind[761]: Removed session 53.
Nov 23 04:30:02 localhost nova_compute[230084]: 2025-11-23 09:30:02.912 230088 INFO nova.virt.driver [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.022 230088 INFO nova.compute.provider_config [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.027 230088 WARNING nova.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.028 230088 DEBUG oslo_concurrency.lockutils [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.028 230088 DEBUG oslo_concurrency.lockutils [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.028 230088 DEBUG oslo_concurrency.lockutils [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.028 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.029 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.029 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.029 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.029 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.029 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.029 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.030 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.030 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.030 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.030 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.030 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.030 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.030 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.030 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.031 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.031 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.031 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.031 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.031 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] console_host                   = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.031 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.032 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.032 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.032 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.032 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.032 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.032 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.032 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.033 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.033 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.033 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.033 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.033 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.033 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.033 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.034 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.034 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.034 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.034 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.034 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.034 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.034 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.035 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.035 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.035 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.035 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.035 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.035 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.036 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.036 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.036 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.036 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.036 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.036 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.036 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.037 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.037 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.037 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.037 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.037 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.037 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.037 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.038 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.038 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.038 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.038 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.038 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.038 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.038 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.039 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.039 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.039 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.039 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.039 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.039 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.039 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.040 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.040 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.040 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.040 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.040 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.040 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.041 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.041 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.041 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.041 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.041 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.041 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.041 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.042 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.042 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.042 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.042 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.042 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.042 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.042 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.043 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.043 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.043 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.043 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.043 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.043 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.044 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.044 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.044 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.044 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.044 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.045 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.045 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.045 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.045 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.045 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.045 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.046 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.046 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.046 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.046 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.046 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.046 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.046 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.047 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.047 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.047 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.047 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.047 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.047 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.048 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.048 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.048 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.048 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.048 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.048 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.048 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.049 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.049 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.049 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.049 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.049 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.049 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.049 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.050 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.050 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.050 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.050 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.050 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.050 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.050 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.051 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.051 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.051 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.051 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.051 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.051 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.051 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.052 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.052 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.052 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.052 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.052 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.052 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.053 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.053 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.053 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.053 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.053 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.053 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.053 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.054 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.054 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.054 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.054 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.054 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.054 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.054 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.055 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.055 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.055 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.055 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.055 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.055 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.056 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.056 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.056 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.056 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.056 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.056 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.056 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.056 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.057 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.057 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.057 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.057 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.057 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.057 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.058 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.058 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.058 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.058 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.058 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.058 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.058 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.059 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.059 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.059 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.059 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.059 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.059 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.059 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.060 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.060 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.060 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.060 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.060 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.060 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.061 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.061 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.061 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.061 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.061 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.061 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.061 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.062 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.062 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.062 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.062 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.062 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.062 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.062 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.062 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.063 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.063 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.063 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.063 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.063 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.063 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.063 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.064 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.064 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.064 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.064 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.064 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.064 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.064 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.065 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.065 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.065 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.065 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.065 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.065 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.065 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.065 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.066 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.066 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.066 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.066 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.066 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.066 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.066 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.067 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.067 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.067 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.067 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.067 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.067 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.068 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.068 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.068 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.068 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.068 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.068 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.068 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.069 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.069 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.069 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.069 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.069 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.069 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.070 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.070 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.070 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.070 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.070 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.070 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.070 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.071 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.071 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.071 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.071 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.071 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.071 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.071 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.072 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.072 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.072 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.072 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.072 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.072 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.072 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.073 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.073 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.073 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.073 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.073 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.073 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.073 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.074 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.074 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.074 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.074 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.074 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.074 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.074 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.075 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.075 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.075 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.075 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.075 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.075 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.075 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.076 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.076 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.076 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.076 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.076 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.076 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.076 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.077 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.077 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.077 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.077 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.077 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.077 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.077 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.078 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.078 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.078 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.078 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.078 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.078 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.079 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.079 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.079 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.079 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.079 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.079 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.079 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.080 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.080 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.080 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.080 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.080 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.080 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.080 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.080 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.081 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.081 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.081 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.081 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.081 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.082 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.082 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.082 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.082 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.082 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.082 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.082 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.083 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.083 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.083 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.083 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.083 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.083 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.083 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.083 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.084 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.084 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.084 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.084 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.084 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.084 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.084 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.085 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.085 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.085 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.085 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.085 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.085 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.085 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.086 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.086 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.086 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.086 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.086 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.086 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.087 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.087 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.087 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.087 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.087 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.087 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.087 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.088 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.088 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.088 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.088 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.088 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.088 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.088 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.089 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.089 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.089 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.089 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.089 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.089 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.089 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.089 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.090 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.090 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.090 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.090 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.090 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.090 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.090 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.091 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.091 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.091 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.091 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.091 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.091 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.091 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.092 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.092 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.092 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.092 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.092 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.092 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.092 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.093 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.093 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.093 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.093 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.093 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.093 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.093 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.094 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.094 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.094 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.094 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.094 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.094 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.094 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.095 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.095 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.095 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.095 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.095 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.095 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.095 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.096 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.096 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.096 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.096 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.096 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.096 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.096 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.097 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.097 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.097 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.097 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.097 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.097 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.097 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.098 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.098 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.098 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.098 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.098 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.098 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.098 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.098 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.099 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.099 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.099 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.099 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.099 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.099 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.099 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.100 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.100 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.100 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.100 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.100 230088 WARNING oslo_config.cfg [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 04:30:03 localhost nova_compute[230084]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 04:30:03 localhost nova_compute[230084]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 04:30:03 localhost nova_compute[230084]: and ``live_migration_inbound_addr`` respectively.
Nov 23 04:30:03 localhost nova_compute[230084]: ).  Its value may be silently ignored in the future.#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.100 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.101 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.101 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.101 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.101 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.101 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.101 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.102 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.102 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.102 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.102 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.102 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.102 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.102 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.103 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.103 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.103 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.103 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.103 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rbd_secret_uuid        = 46550e70-79cb-5f55-bf6d-1204b97e083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.103 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.103 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.104 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.104 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.104 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.104 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.104 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.104 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.104 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.105 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.105 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.105 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.105 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.105 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.106 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.106 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.106 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.106 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.106 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.106 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.106 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.107 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.107 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.107 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.107 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.107 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.107 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.108 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.108 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.108 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.108 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.108 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.109 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.109 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.109 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.109 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.109 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.109 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.109 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.110 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.110 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.110 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.110 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.110 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.111 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.111 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.111 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.111 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.111 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.111 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.112 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.112 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.112 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.112 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.112 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.112 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.112 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.113 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.113 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.113 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.113 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.113 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.113 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.113 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.114 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.114 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.114 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.114 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.114 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.114 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.114 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.115 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.115 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.115 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.115 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.115 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.115 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.115 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.116 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.116 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.116 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.116 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.116 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.116 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.116 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.117 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.117 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.117 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.117 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.117 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.117 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.117 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.118 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.118 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.118 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.118 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.118 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.118 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.118 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.118 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.119 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.119 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.119 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.119 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.119 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.119 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.120 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.120 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.120 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.120 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.120 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.120 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.120 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.121 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.121 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.121 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.121 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.121 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.121 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.121 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.122 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.122 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.122 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.122 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.122 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.122 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.122 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.123 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.123 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.123 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.123 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.123 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.123 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.124 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.124 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.124 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.124 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.124 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.124 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.124 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.125 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.125 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.125 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.125 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.125 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.125 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.125 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.126 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.126 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.126 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.126 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.126 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.126 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.126 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.126 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.127 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.127 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.127 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.127 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.127 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.127 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.128 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.128 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.128 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.128 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.128 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.128 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.128 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.129 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.129 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.129 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.129 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.129 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.129 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.129 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.130 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.130 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.130 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.130 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.130 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.130 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.130 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.131 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.131 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.131 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.131 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.131 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.131 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.131 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.132 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.132 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.132 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.132 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.132 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.132 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.132 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.133 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.133 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.133 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.133 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.133 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.133 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.133 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.134 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.134 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.134 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.134 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.134 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.134 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.134 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.135 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.135 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.135 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.135 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.135 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.135 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.135 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.135 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.136 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.136 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.136 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.136 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.136 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.136 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.136 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.137 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.137 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.137 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.137 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.137 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.137 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.137 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.138 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.138 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.138 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.138 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.138 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.138 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.138 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.139 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.139 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.139 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.139 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.139 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.139 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.140 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.140 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.140 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.140 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.140 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.140 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.140 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.141 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.141 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.141 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.141 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.141 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.141 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.141 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.142 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.142 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.142 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.142 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.142 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.142 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.142 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.142 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.143 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.143 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.143 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.143 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.143 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.144 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.144 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.144 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.144 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.144 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.144 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.144 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.145 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.145 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.145 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.145 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.145 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.145 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.145 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.146 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.146 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.146 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.146 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.146 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.146 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.146 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.147 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.147 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.147 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.147 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.147 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.147 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.147 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.148 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.148 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.148 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.148 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.148 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.148 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.148 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.149 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.149 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.149 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.149 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.149 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.149 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.149 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.150 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.150 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.150 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.150 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.150 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.150 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.150 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.151 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.151 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.151 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.151 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.151 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.151 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.151 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.152 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.152 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.152 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.152 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.152 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.152 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.152 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.152 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.153 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.153 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.153 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.153 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.153 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.153 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.153 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.154 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.154 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.154 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.154 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.154 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.154 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.154 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.154 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.155 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.155 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.155 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.155 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.155 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.155 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.155 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.156 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.156 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.156 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.156 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.156 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.156 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.156 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.157 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.157 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.157 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.157 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.157 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.157 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.158 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.158 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.158 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.158 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.158 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.158 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.158 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.159 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.159 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.159 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.159 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.159 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.159 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.159 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.160 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.160 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.160 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.160 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.160 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.160 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.160 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.161 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.161 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.161 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.161 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.161 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.161 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.161 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.162 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.162 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.162 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.162 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.162 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.162 230088 DEBUG oslo_service.service [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.163 230088 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.173 230088 INFO nova.virt.node [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Determined node identity 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from /var/lib/nova/compute_id#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.173 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.174 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.174 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.175 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.185 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f9c04a439d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.187 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f9c04a439d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.188 230088 INFO nova.virt.libvirt.driver [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.193 230088 INFO nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 04:30:03 localhost nova_compute[230084]: 
Nov 23 04:30:03 localhost nova_compute[230084]:  <host>
Nov 23 04:30:03 localhost nova_compute[230084]:    <uuid>94eff25b-7070-4dc8-8cfe-491426a98db3</uuid>
Nov 23 04:30:03 localhost nova_compute[230084]:    <cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:      <arch>x86_64</arch>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model>EPYC-Rome-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <vendor>AMD</vendor>
Nov 23 04:30:03 localhost nova_compute[230084]:      <microcode version='16777317'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <signature family='23' model='49' stepping='0'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='x2apic'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='tsc-deadline'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='osxsave'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='hypervisor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='tsc_adjust'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='spec-ctrl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='stibp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='arch-capabilities'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='cmp_legacy'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='topoext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='virt-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='lbrv'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='tsc-scale'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='vmcb-clean'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='pause-filter'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='pfthreshold'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='svme-addr-chk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='rdctl-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='skip-l1dfl-vmentry'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='mds-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature name='pschange-mc-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <pages unit='KiB' size='4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <pages unit='KiB' size='2048'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <pages unit='KiB' size='1048576'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:    <power_management>
Nov 23 04:30:03 localhost nova_compute[230084]:      <suspend_mem/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <suspend_disk/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <suspend_hybrid/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </power_management>
Nov 23 04:30:03 localhost nova_compute[230084]:    <iommu support='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <migration_features>
Nov 23 04:30:03 localhost nova_compute[230084]:      <live/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <uri_transports>
Nov 23 04:30:03 localhost nova_compute[230084]:        <uri_transport>tcp</uri_transport>
Nov 23 04:30:03 localhost nova_compute[230084]:        <uri_transport>rdma</uri_transport>
Nov 23 04:30:03 localhost nova_compute[230084]:      </uri_transports>
Nov 23 04:30:03 localhost nova_compute[230084]:    </migration_features>
Nov 23 04:30:03 localhost nova_compute[230084]:    <topology>
Nov 23 04:30:03 localhost nova_compute[230084]:      <cells num='1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <cell id='0'>
Nov 23 04:30:03 localhost nova_compute[230084]:          <memory unit='KiB'>16116604</memory>
Nov 23 04:30:03 localhost nova_compute[230084]:          <pages unit='KiB' size='4'>4029151</pages>
Nov 23 04:30:03 localhost nova_compute[230084]:          <pages unit='KiB' size='2048'>0</pages>
Nov 23 04:30:03 localhost nova_compute[230084]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 23 04:30:03 localhost nova_compute[230084]:          <distances>
Nov 23 04:30:03 localhost nova_compute[230084]:            <sibling id='0' value='10'/>
Nov 23 04:30:03 localhost nova_compute[230084]:          </distances>
Nov 23 04:30:03 localhost nova_compute[230084]:          <cpus num='8'>
Nov 23 04:30:03 localhost nova_compute[230084]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 04:30:03 localhost nova_compute[230084]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 04:30:03 localhost nova_compute[230084]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 04:30:03 localhost nova_compute[230084]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 04:30:03 localhost nova_compute[230084]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 04:30:03 localhost nova_compute[230084]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 04:30:03 localhost nova_compute[230084]:          </cpus>
Nov 23 04:30:03 localhost nova_compute[230084]:        </cell>
Nov 23 04:30:03 localhost nova_compute[230084]:      </cells>
Nov 23 04:30:03 localhost nova_compute[230084]:    </topology>
Nov 23 04:30:03 localhost nova_compute[230084]:    <cache>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </cache>
Nov 23 04:30:03 localhost nova_compute[230084]:    <secmodel>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model>selinux</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <doi>0</doi>
Nov 23 04:30:03 localhost nova_compute[230084]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 04:30:03 localhost nova_compute[230084]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 04:30:03 localhost nova_compute[230084]:    </secmodel>
Nov 23 04:30:03 localhost nova_compute[230084]:    <secmodel>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model>dac</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <doi>0</doi>
Nov 23 04:30:03 localhost nova_compute[230084]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 04:30:03 localhost nova_compute[230084]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 04:30:03 localhost nova_compute[230084]:    </secmodel>
Nov 23 04:30:03 localhost nova_compute[230084]:  </host>
Nov 23 04:30:03 localhost nova_compute[230084]: 
Nov 23 04:30:03 localhost nova_compute[230084]:  <guest>
Nov 23 04:30:03 localhost nova_compute[230084]:    <os_type>hvm</os_type>
Nov 23 04:30:03 localhost nova_compute[230084]:    <arch name='i686'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <wordsize>32</wordsize>
Nov 23 04:30:03 localhost nova_compute[230084]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <domain type='qemu'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <domain type='kvm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </arch>
Nov 23 04:30:03 localhost nova_compute[230084]:    <features>
Nov 23 04:30:03 localhost nova_compute[230084]:      <pae/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <nonpae/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <acpi default='on' toggle='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <apic default='on' toggle='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <cpuselection/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <deviceboot/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <disksnapshot default='on' toggle='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <externalSnapshot/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </features>
Nov 23 04:30:03 localhost nova_compute[230084]:  </guest>
Nov 23 04:30:03 localhost nova_compute[230084]: 
Nov 23 04:30:03 localhost nova_compute[230084]:  <guest>
Nov 23 04:30:03 localhost nova_compute[230084]:    <os_type>hvm</os_type>
Nov 23 04:30:03 localhost nova_compute[230084]:    <arch name='x86_64'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <wordsize>64</wordsize>
Nov 23 04:30:03 localhost nova_compute[230084]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:      <domain type='qemu'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <domain type='kvm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </arch>
Nov 23 04:30:03 localhost nova_compute[230084]:    <features>
Nov 23 04:30:03 localhost nova_compute[230084]:      <acpi default='on' toggle='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <apic default='on' toggle='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <cpuselection/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <deviceboot/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <disksnapshot default='on' toggle='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <externalSnapshot/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </features>
Nov 23 04:30:03 localhost nova_compute[230084]:  </guest>
Nov 23 04:30:03 localhost nova_compute[230084]: 
Nov 23 04:30:03 localhost nova_compute[230084]: </capabilities>
Nov 23 04:30:03 localhost nova_compute[230084]: #033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.199 230088 DEBUG nova.virt.libvirt.volume.mount [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.200 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.204 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 04:30:03 localhost nova_compute[230084]: <domainCapabilities>
Nov 23 04:30:03 localhost nova_compute[230084]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:30:03 localhost nova_compute[230084]:  <domain>kvm</domain>
Nov 23 04:30:03 localhost nova_compute[230084]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:  <arch>i686</arch>
Nov 23 04:30:03 localhost nova_compute[230084]:  <vcpu max='240'/>
Nov 23 04:30:03 localhost nova_compute[230084]:  <iothreads supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:  <os supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:    <enum name='firmware'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <loader supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>rom</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pflash</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='readonly'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>yes</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>no</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='secure'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>no</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </loader>
Nov 23 04:30:03 localhost nova_compute[230084]:  </os>
Nov 23 04:30:03 localhost nova_compute[230084]:  <cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>on</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>off</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='maximum' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='maximumMigratable'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>on</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>off</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='host-model' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <vendor>AMD</vendor>
Nov 23 04:30:03 localhost nova_compute[230084]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='x2apic'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='stibp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='succor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='lbrv'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='custom' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Dhyana-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Genoa'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='auto-ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='auto-ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-128'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-256'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-512'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='KnightsMill'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4fmaps'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4vnniw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512er'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512pf'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='KnightsMill-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4fmaps'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4vnniw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512er'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512pf'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tbm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tbm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SierraForest'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ne-convert'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cmpccxadd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SierraForest-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ne-convert'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cmpccxadd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='athlon'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='athlon-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='core2duo'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='core2duo-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='coreduo'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='coreduo-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='n270'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='n270-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='phenom'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='phenom-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:  </cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:  <memoryBacking supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:    <enum name='sourceType'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>file</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>anonymous</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>memfd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:    </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:  </memoryBacking>
Nov 23 04:30:03 localhost nova_compute[230084]:  <devices>
Nov 23 04:30:03 localhost nova_compute[230084]:    <disk supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='diskDevice'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>disk</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>cdrom</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>floppy</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>lun</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='bus'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ide</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>fdc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>scsi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>sata</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-non-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </disk>
Nov 23 04:30:03 localhost nova_compute[230084]:    <graphics supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vnc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>egl-headless</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dbus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </graphics>
Nov 23 04:30:03 localhost nova_compute[230084]:    <video supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='modelType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vga</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>cirrus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>none</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>bochs</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ramfb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </video>
Nov 23 04:30:03 localhost nova_compute[230084]:    <hostdev supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='mode'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>subsystem</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='startupPolicy'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>default</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>mandatory</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>requisite</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>optional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='subsysType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pci</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>scsi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='capsType'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='pciBackend'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </hostdev>
Nov 23 04:30:03 localhost nova_compute[230084]:    <rng supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-non-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>random</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>egd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>builtin</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </rng>
Nov 23 04:30:03 localhost nova_compute[230084]:    <filesystem supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='driverType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>path</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>handle</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtiofs</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </filesystem>
Nov 23 04:30:03 localhost nova_compute[230084]:    <tpm supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tpm-tis</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tpm-crb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>emulator</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>external</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendVersion'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>2.0</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </tpm>
Nov 23 04:30:03 localhost nova_compute[230084]:    <redirdev supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='bus'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </redirdev>
Nov 23 04:30:03 localhost nova_compute[230084]:    <channel supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pty</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>unix</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </channel>
Nov 23 04:30:03 localhost nova_compute[230084]:    <crypto supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>qemu</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>builtin</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </crypto>
Nov 23 04:30:03 localhost nova_compute[230084]:    <interface supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>default</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>passt</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </interface>
Nov 23 04:30:03 localhost nova_compute[230084]:    <panic supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>isa</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>hyperv</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </panic>
Nov 23 04:30:03 localhost nova_compute[230084]:    <console supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>null</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pty</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dev</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>file</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pipe</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>stdio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>udp</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tcp</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>unix</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>qemu-vdagent</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dbus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </console>
Nov 23 04:30:03 localhost nova_compute[230084]:  </devices>
Nov 23 04:30:03 localhost nova_compute[230084]:  <features>
Nov 23 04:30:03 localhost nova_compute[230084]:    <gic supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <vmcoreinfo supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <genid supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <backingStoreInput supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <backup supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <async-teardown supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <ps2 supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <sev supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <sgx supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <hyperv supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='features'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>relaxed</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vapic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>spinlocks</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vpindex</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>runtime</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>synic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>stimer</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>reset</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vendor_id</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>frequencies</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>reenlightenment</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tlbflush</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ipi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>avic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>emsr_bitmap</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>xmm_input</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <defaults>
Nov 23 04:30:03 localhost nova_compute[230084]:        <spinlocks>4095</spinlocks>
Nov 23 04:30:03 localhost nova_compute[230084]:        <stimer_direct>on</stimer_direct>
Nov 23 04:30:03 localhost nova_compute[230084]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:30:03 localhost nova_compute[230084]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:30:03 localhost nova_compute[230084]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:30:03 localhost nova_compute[230084]:      </defaults>
Nov 23 04:30:03 localhost nova_compute[230084]:    </hyperv>
Nov 23 04:30:03 localhost nova_compute[230084]:    <launchSecurity supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='sectype'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tdx</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </launchSecurity>
Nov 23 04:30:03 localhost nova_compute[230084]:  </features>
Nov 23 04:30:03 localhost nova_compute[230084]: </domainCapabilities>
Nov 23 04:30:03 localhost nova_compute[230084]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.211 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 04:30:03 localhost nova_compute[230084]: <domainCapabilities>
Nov 23 04:30:03 localhost nova_compute[230084]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:30:03 localhost nova_compute[230084]:  <domain>kvm</domain>
Nov 23 04:30:03 localhost nova_compute[230084]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:  <arch>i686</arch>
Nov 23 04:30:03 localhost nova_compute[230084]:  <vcpu max='1024'/>
Nov 23 04:30:03 localhost nova_compute[230084]:  <iothreads supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:  <os supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:    <enum name='firmware'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <loader supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>rom</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pflash</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='readonly'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>yes</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>no</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='secure'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>no</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </loader>
Nov 23 04:30:03 localhost nova_compute[230084]:  </os>
Nov 23 04:30:03 localhost nova_compute[230084]:  <cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>on</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>off</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='maximum' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='maximumMigratable'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>on</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>off</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='host-model' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <vendor>AMD</vendor>
Nov 23 04:30:03 localhost nova_compute[230084]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='x2apic'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='stibp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='succor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='lbrv'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='custom' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Dhyana-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Genoa'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='auto-ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='auto-ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-128'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-256'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-512'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='KnightsMill'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4fmaps'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4vnniw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512er'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512pf'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='KnightsMill-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4fmaps'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4vnniw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512er'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512pf'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tbm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tbm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SierraForest'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ne-convert'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cmpccxadd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SierraForest-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ne-convert'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cmpccxadd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='athlon'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='athlon-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='core2duo'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='core2duo-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='coreduo'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='coreduo-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='n270'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='n270-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='phenom'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='phenom-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:  </cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:  <memoryBacking supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:    <enum name='sourceType'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>file</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>anonymous</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>memfd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:    </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:  </memoryBacking>
Nov 23 04:30:03 localhost nova_compute[230084]:  <devices>
Nov 23 04:30:03 localhost nova_compute[230084]:    <disk supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='diskDevice'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>disk</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>cdrom</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>floppy</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>lun</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='bus'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>fdc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>scsi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>sata</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-non-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </disk>
Nov 23 04:30:03 localhost nova_compute[230084]:    <graphics supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vnc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>egl-headless</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dbus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </graphics>
Nov 23 04:30:03 localhost nova_compute[230084]:    <video supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='modelType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vga</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>cirrus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>none</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>bochs</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ramfb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </video>
Nov 23 04:30:03 localhost nova_compute[230084]:    <hostdev supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='mode'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>subsystem</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='startupPolicy'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>default</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>mandatory</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>requisite</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>optional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='subsysType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pci</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>scsi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='capsType'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='pciBackend'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </hostdev>
Nov 23 04:30:03 localhost nova_compute[230084]:    <rng supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-non-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>random</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>egd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>builtin</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </rng>
Nov 23 04:30:03 localhost nova_compute[230084]:    <filesystem supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='driverType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>path</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>handle</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtiofs</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </filesystem>
Nov 23 04:30:03 localhost nova_compute[230084]:    <tpm supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tpm-tis</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tpm-crb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>emulator</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>external</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendVersion'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>2.0</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </tpm>
Nov 23 04:30:03 localhost nova_compute[230084]:    <redirdev supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='bus'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </redirdev>
Nov 23 04:30:03 localhost nova_compute[230084]:    <channel supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pty</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>unix</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </channel>
Nov 23 04:30:03 localhost nova_compute[230084]:    <crypto supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>qemu</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>builtin</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </crypto>
Nov 23 04:30:03 localhost nova_compute[230084]:    <interface supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>default</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>passt</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </interface>
Nov 23 04:30:03 localhost nova_compute[230084]:    <panic supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>isa</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>hyperv</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </panic>
Nov 23 04:30:03 localhost nova_compute[230084]:    <console supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>null</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pty</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dev</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>file</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pipe</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>stdio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>udp</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tcp</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>unix</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>qemu-vdagent</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dbus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </console>
Nov 23 04:30:03 localhost nova_compute[230084]:  </devices>
Nov 23 04:30:03 localhost nova_compute[230084]:  <features>
Nov 23 04:30:03 localhost nova_compute[230084]:    <gic supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <vmcoreinfo supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <genid supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <backingStoreInput supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <backup supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <async-teardown supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <ps2 supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <sev supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <sgx supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <hyperv supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='features'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>relaxed</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vapic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>spinlocks</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vpindex</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>runtime</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>synic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>stimer</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>reset</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vendor_id</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>frequencies</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>reenlightenment</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tlbflush</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ipi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>avic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>emsr_bitmap</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>xmm_input</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <defaults>
Nov 23 04:30:03 localhost nova_compute[230084]:        <spinlocks>4095</spinlocks>
Nov 23 04:30:03 localhost nova_compute[230084]:        <stimer_direct>on</stimer_direct>
Nov 23 04:30:03 localhost nova_compute[230084]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:30:03 localhost nova_compute[230084]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:30:03 localhost nova_compute[230084]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:30:03 localhost nova_compute[230084]:      </defaults>
Nov 23 04:30:03 localhost nova_compute[230084]:    </hyperv>
Nov 23 04:30:03 localhost nova_compute[230084]:    <launchSecurity supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='sectype'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tdx</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </launchSecurity>
Nov 23 04:30:03 localhost nova_compute[230084]:  </features>
Nov 23 04:30:03 localhost nova_compute[230084]: </domainCapabilities>
Nov 23 04:30:03 localhost nova_compute[230084]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.250 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.255 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 04:30:03 localhost nova_compute[230084]: <domainCapabilities>
Nov 23 04:30:03 localhost nova_compute[230084]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:30:03 localhost nova_compute[230084]:  <domain>kvm</domain>
Nov 23 04:30:03 localhost nova_compute[230084]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:  <arch>x86_64</arch>
Nov 23 04:30:03 localhost nova_compute[230084]:  <vcpu max='1024'/>
Nov 23 04:30:03 localhost nova_compute[230084]:  <iothreads supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:  <os supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:    <enum name='firmware'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>efi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:    </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    <loader supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>rom</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pflash</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='readonly'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>yes</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>no</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='secure'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>yes</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>no</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </loader>
Nov 23 04:30:03 localhost nova_compute[230084]:  </os>
Nov 23 04:30:03 localhost nova_compute[230084]:  <cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>on</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>off</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='maximum' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='maximumMigratable'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>on</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>off</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='host-model' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <vendor>AMD</vendor>
Nov 23 04:30:03 localhost nova_compute[230084]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='x2apic'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='stibp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='succor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='lbrv'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='custom' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Dhyana-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Genoa'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='auto-ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='auto-ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-128'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-256'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-512'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='KnightsMill'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4fmaps'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4vnniw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512er'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512pf'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='KnightsMill-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4fmaps'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4vnniw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512er'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512pf'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tbm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tbm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SierraForest'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ne-convert'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cmpccxadd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SierraForest-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ne-convert'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cmpccxadd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='athlon'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='athlon-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='core2duo'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='core2duo-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='coreduo'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='coreduo-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='n270'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='n270-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='phenom'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='phenom-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:  </cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:  <memoryBacking supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:    <enum name='sourceType'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>file</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>anonymous</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>memfd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:    </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:  </memoryBacking>
Nov 23 04:30:03 localhost nova_compute[230084]:  <devices>
Nov 23 04:30:03 localhost nova_compute[230084]:    <disk supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='diskDevice'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>disk</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>cdrom</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>floppy</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>lun</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='bus'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>fdc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>scsi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>sata</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-non-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </disk>
Nov 23 04:30:03 localhost nova_compute[230084]:    <graphics supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vnc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>egl-headless</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dbus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </graphics>
Nov 23 04:30:03 localhost nova_compute[230084]:    <video supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='modelType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vga</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>cirrus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>none</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>bochs</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ramfb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </video>
Nov 23 04:30:03 localhost nova_compute[230084]:    <hostdev supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='mode'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>subsystem</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='startupPolicy'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>default</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>mandatory</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>requisite</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>optional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='subsysType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pci</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>scsi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='capsType'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='pciBackend'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </hostdev>
Nov 23 04:30:03 localhost nova_compute[230084]:    <rng supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-non-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>random</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>egd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>builtin</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </rng>
Nov 23 04:30:03 localhost nova_compute[230084]:    <filesystem supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='driverType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>path</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>handle</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtiofs</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </filesystem>
Nov 23 04:30:03 localhost nova_compute[230084]:    <tpm supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tpm-tis</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tpm-crb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>emulator</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>external</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendVersion'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>2.0</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </tpm>
Nov 23 04:30:03 localhost nova_compute[230084]:    <redirdev supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='bus'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </redirdev>
Nov 23 04:30:03 localhost nova_compute[230084]:    <channel supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pty</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>unix</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </channel>
Nov 23 04:30:03 localhost nova_compute[230084]:    <crypto supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>qemu</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>builtin</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </crypto>
Nov 23 04:30:03 localhost nova_compute[230084]:    <interface supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>default</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>passt</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </interface>
Nov 23 04:30:03 localhost nova_compute[230084]:    <panic supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>isa</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>hyperv</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </panic>
Nov 23 04:30:03 localhost nova_compute[230084]:    <console supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>null</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pty</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dev</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>file</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pipe</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>stdio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>udp</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tcp</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>unix</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>qemu-vdagent</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dbus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </console>
Nov 23 04:30:03 localhost nova_compute[230084]:  </devices>
Nov 23 04:30:03 localhost nova_compute[230084]:  <features>
Nov 23 04:30:03 localhost nova_compute[230084]:    <gic supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <vmcoreinfo supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <genid supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <backingStoreInput supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <backup supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <async-teardown supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <ps2 supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <sev supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <sgx supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <hyperv supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='features'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>relaxed</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vapic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>spinlocks</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vpindex</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>runtime</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>synic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>stimer</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>reset</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vendor_id</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>frequencies</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>reenlightenment</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tlbflush</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ipi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>avic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>emsr_bitmap</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>xmm_input</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <defaults>
Nov 23 04:30:03 localhost nova_compute[230084]:        <spinlocks>4095</spinlocks>
Nov 23 04:30:03 localhost nova_compute[230084]:        <stimer_direct>on</stimer_direct>
Nov 23 04:30:03 localhost nova_compute[230084]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:30:03 localhost nova_compute[230084]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:30:03 localhost nova_compute[230084]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:30:03 localhost nova_compute[230084]:      </defaults>
Nov 23 04:30:03 localhost nova_compute[230084]:    </hyperv>
Nov 23 04:30:03 localhost nova_compute[230084]:    <launchSecurity supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='sectype'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tdx</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </launchSecurity>
Nov 23 04:30:03 localhost nova_compute[230084]:  </features>
Nov 23 04:30:03 localhost nova_compute[230084]: </domainCapabilities>
Nov 23 04:30:03 localhost nova_compute[230084]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.322 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 04:30:03 localhost nova_compute[230084]: <domainCapabilities>
Nov 23 04:30:03 localhost nova_compute[230084]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:30:03 localhost nova_compute[230084]:  <domain>kvm</domain>
Nov 23 04:30:03 localhost nova_compute[230084]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:30:03 localhost nova_compute[230084]:  <arch>x86_64</arch>
Nov 23 04:30:03 localhost nova_compute[230084]:  <vcpu max='240'/>
Nov 23 04:30:03 localhost nova_compute[230084]:  <iothreads supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:  <os supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:    <enum name='firmware'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <loader supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>rom</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pflash</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='readonly'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>yes</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>no</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='secure'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>no</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </loader>
Nov 23 04:30:03 localhost nova_compute[230084]:  </os>
Nov 23 04:30:03 localhost nova_compute[230084]:  <cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>on</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>off</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='maximum' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='maximumMigratable'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>on</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>off</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='host-model' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <vendor>AMD</vendor>
Nov 23 04:30:03 localhost nova_compute[230084]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='x2apic'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='stibp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='succor'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='lbrv'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:    <mode name='custom' supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Broadwell-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Cooperlake-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Denverton-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Dhyana-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Genoa'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='auto-ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='auto-ibrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amd-psfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='no-nested-data-bp'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='null-sel-clr-base'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='stibp-always-on'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='EPYC-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-128'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-256'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx10-512'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='prefetchiti'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Haswell-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='IvyBridge-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='KnightsMill'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4fmaps'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4vnniw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512er'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512pf'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='KnightsMill-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4fmaps'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-4vnniw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512er'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512pf'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tbm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fma4'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tbm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xop'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='amx-tile'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-bf16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-fp16'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bitalg'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vbmi2'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrc'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fzrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='la57'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='taa-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='tsx-ldtrk'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xfd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SierraForest'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ne-convert'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cmpccxadd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='SierraForest-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ifma'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-ne-convert'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx-vnni-int8'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='bus-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cmpccxadd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fbsdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='fsrs'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ibrs-all'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mcdt-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pbrsb-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='psdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='serialize'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vaes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='vpclmulqdq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='hle'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='rtm'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512bw'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512cd'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512dq'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512f'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='avx512vl'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='invpcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pcid'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='pku'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='mpx'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v2'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v3'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='core-capability'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='split-lock-detect'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='Snowridge-v4'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='cldemote'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='erms'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='gfni'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdir64b'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='movdiri'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='xsaves'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='athlon'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='athlon-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='core2duo'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='core2duo-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='coreduo'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='coreduo-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='n270'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='n270-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='ss'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='phenom'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <blockers model='phenom-v1'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnow'/>
Nov 23 04:30:03 localhost nova_compute[230084]:        <feature name='3dnowext'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      </blockers>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:30:03 localhost nova_compute[230084]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:30:03 localhost nova_compute[230084]:    </mode>
Nov 23 04:30:03 localhost nova_compute[230084]:  </cpu>
Nov 23 04:30:03 localhost nova_compute[230084]:  <memoryBacking supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:    <enum name='sourceType'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>file</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>anonymous</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      <value>memfd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:    </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:  </memoryBacking>
Nov 23 04:30:03 localhost nova_compute[230084]:  <devices>
Nov 23 04:30:03 localhost nova_compute[230084]:    <disk supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='diskDevice'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>disk</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>cdrom</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>floppy</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>lun</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='bus'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ide</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>fdc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>scsi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>sata</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-non-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </disk>
Nov 23 04:30:03 localhost nova_compute[230084]:    <graphics supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vnc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>egl-headless</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dbus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </graphics>
Nov 23 04:30:03 localhost nova_compute[230084]:    <video supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='modelType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vga</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>cirrus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>none</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>bochs</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ramfb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </video>
Nov 23 04:30:03 localhost nova_compute[230084]:    <hostdev supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='mode'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>subsystem</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='startupPolicy'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>default</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>mandatory</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>requisite</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>optional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='subsysType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pci</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>scsi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='capsType'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='pciBackend'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    </hostdev>
Nov 23 04:30:03 localhost nova_compute[230084]:    <rng supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtio-non-transitional</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>random</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>egd</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>builtin</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </rng>
Nov 23 04:30:03 localhost nova_compute[230084]:    <filesystem supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='driverType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>path</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>handle</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>virtiofs</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </filesystem>
Nov 23 04:30:03 localhost nova_compute[230084]:    <tpm supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tpm-tis</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tpm-crb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>emulator</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>external</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendVersion'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>2.0</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </tpm>
Nov 23 04:30:03 localhost nova_compute[230084]:    <redirdev supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='bus'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>usb</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </redirdev>
Nov 23 04:30:03 localhost nova_compute[230084]:    <channel supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pty</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>unix</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </channel>
Nov 23 04:30:03 localhost nova_compute[230084]:    <crypto supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'/>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>qemu</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendModel'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>builtin</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </crypto>
Nov 23 04:30:03 localhost nova_compute[230084]:    <interface supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='backendType'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>default</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>passt</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </interface>
Nov 23 04:30:03 localhost nova_compute[230084]:    <panic supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='model'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>isa</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>hyperv</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </panic>
Nov 23 04:30:03 localhost nova_compute[230084]:    <console supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='type'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>null</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vc</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pty</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dev</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>file</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>pipe</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>stdio</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>udp</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tcp</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>unix</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>qemu-vdagent</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>dbus</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </console>
Nov 23 04:30:03 localhost nova_compute[230084]:  </devices>
Nov 23 04:30:03 localhost nova_compute[230084]:  <features>
Nov 23 04:30:03 localhost nova_compute[230084]:    <gic supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <vmcoreinfo supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <genid supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <backingStoreInput supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <backup supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <async-teardown supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <ps2 supported='yes'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <sev supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <sgx supported='no'/>
Nov 23 04:30:03 localhost nova_compute[230084]:    <hyperv supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='features'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>relaxed</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vapic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>spinlocks</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vpindex</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>runtime</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>synic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>stimer</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>reset</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>vendor_id</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>frequencies</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>reenlightenment</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tlbflush</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>ipi</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>avic</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>emsr_bitmap</value>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>xmm_input</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:      <defaults>
Nov 23 04:30:03 localhost nova_compute[230084]:        <spinlocks>4095</spinlocks>
Nov 23 04:30:03 localhost nova_compute[230084]:        <stimer_direct>on</stimer_direct>
Nov 23 04:30:03 localhost nova_compute[230084]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:30:03 localhost nova_compute[230084]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:30:03 localhost nova_compute[230084]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:30:03 localhost nova_compute[230084]:      </defaults>
Nov 23 04:30:03 localhost nova_compute[230084]:    </hyperv>
Nov 23 04:30:03 localhost nova_compute[230084]:    <launchSecurity supported='yes'>
Nov 23 04:30:03 localhost nova_compute[230084]:      <enum name='sectype'>
Nov 23 04:30:03 localhost nova_compute[230084]:        <value>tdx</value>
Nov 23 04:30:03 localhost nova_compute[230084]:      </enum>
Nov 23 04:30:03 localhost nova_compute[230084]:    </launchSecurity>
Nov 23 04:30:03 localhost nova_compute[230084]:  </features>
Nov 23 04:30:03 localhost nova_compute[230084]: </domainCapabilities>
Nov 23 04:30:03 localhost nova_compute[230084]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.392 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.393 230088 INFO nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Secure Boot support detected#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.395 230088 INFO nova.virt.libvirt.driver [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.395 230088 INFO nova.virt.libvirt.driver [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.406 230088 DEBUG nova.virt.libvirt.driver [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.420 230088 INFO nova.virt.node [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Determined node identity 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from /var/lib/nova/compute_id#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.437 230088 DEBUG nova.compute.manager [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Verified node 1df367d3-e79d-4d54-9b3c-f6af3beffa8b matches my host np0005532586.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.485 230088 INFO nova.compute.manager [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.564 230088 DEBUG oslo_concurrency.lockutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.565 230088 DEBUG oslo_concurrency.lockutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.565 230088 DEBUG oslo_concurrency.lockutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.566 230088 DEBUG nova.compute.resource_tracker [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:30:03 localhost nova_compute[230084]: 2025-11-23 09:30:03.566 230088 DEBUG oslo_concurrency.processutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.028 230088 DEBUG oslo_concurrency.processutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.232 230088 WARNING nova.virt.libvirt.driver [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.233 230088 DEBUG nova.compute.resource_tracker [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=13586MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.233 230088 DEBUG oslo_concurrency.lockutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.233 230088 DEBUG oslo_concurrency.lockutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.323 230088 DEBUG nova.compute.resource_tracker [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.323 230088 DEBUG nova.compute.resource_tracker [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.373 230088 DEBUG nova.scheduler.client.report [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.389 230088 DEBUG nova.scheduler.client.report [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.390 230088 DEBUG nova.compute.provider_tree [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.405 230088 DEBUG nova.scheduler.client.report [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.428 230088 DEBUG nova.scheduler.client.report [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: HW_CPU_X86_BMI,COMPUTE_NODE,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.447 230088 DEBUG oslo_concurrency.processutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.918 230088 DEBUG oslo_concurrency.processutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.924 230088 DEBUG nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 23 04:30:04 localhost nova_compute[230084]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.924 230088 INFO nova.virt.libvirt.host [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.926 230088 DEBUG nova.compute.provider_tree [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.926 230088 DEBUG nova.virt.libvirt.driver [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.945 230088 DEBUG nova.scheduler.client.report [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.967 230088 DEBUG nova.compute.resource_tracker [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.968 230088 DEBUG oslo_concurrency.lockutils [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.968 230088 DEBUG nova.service [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.992 230088 DEBUG nova.service [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 23 04:30:04 localhost nova_compute[230084]: 2025-11-23 09:30:04.993 230088 DEBUG nova.servicegroup.drivers.db [None req-563d78c6-75e3-4676-8ed0-662f130b9c08 - - - - - -] DB_Driver: join new ServiceGroup member np0005532586.localdomain to the compute group, service = <Service: host=np0005532586.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 23 04:30:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10173 DF PROTO=TCP SPT=49052 DPT=9100 SEQ=1839795987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590E6DA0000000001030307) 
Nov 23 04:30:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35004 DF PROTO=TCP SPT=50046 DPT=9105 SEQ=37278830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590F0DA0000000001030307) 
Nov 23 04:30:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52054 DF PROTO=TCP SPT=57634 DPT=9102 SEQ=302898848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590F2DA0000000001030307) 
Nov 23 04:30:08 localhost sshd[230376]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:30:08 localhost systemd-logind[761]: New session 55 of user zuul.
Nov 23 04:30:08 localhost systemd[1]: Started Session 55 of User zuul.
Nov 23 04:30:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:30:09.227 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:30:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:30:09.228 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:30:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:30:09.228 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:30:09 localhost python3.9[230487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:30:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7691 DF PROTO=TCP SPT=55970 DPT=9882 SEQ=1790489802 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7590FDFC0000000001030307) 
Nov 23 04:30:11 localhost python3.9[230601]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:30:11 localhost systemd[1]: Reloading.
Nov 23 04:30:12 localhost systemd-rc-local-generator[230624]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:30:12 localhost systemd-sysv-generator[230628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24036 DF PROTO=TCP SPT=60460 DPT=9101 SEQ=2189288604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591065A0000000001030307) 
Nov 23 04:30:13 localhost python3.9[230744]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:30:13 localhost network[230761]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:30:13 localhost network[230762]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:30:13 localhost network[230763]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:30:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:30:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:30:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:30:16 localhost podman[230818]: 2025-11-23 09:30:16.178479679 +0000 UTC m=+0.085470451 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:30:16 localhost podman[230818]: 2025-11-23 09:30:16.214026715 +0000 UTC m=+0.121017447 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:30:16 localhost systemd[1]: tmp-crun.fCXjfg.mount: Deactivated successfully.
Nov 23 04:30:16 localhost podman[230819]: 2025-11-23 09:30:16.229498401 +0000 UTC m=+0.135228760 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 04:30:16 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:30:16 localhost podman[230819]: 2025-11-23 09:30:16.300014058 +0000 UTC m=+0.205744367 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:30:16 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:30:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24037 DF PROTO=TCP SPT=60460 DPT=9101 SEQ=2189288604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759116190000000001030307) 
Nov 23 04:30:18 localhost python3.9[231038]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:30:19 localhost python3.9[231149]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:19 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation.
Nov 23 04:30:19 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 04:30:19 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:30:19 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:30:19 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:30:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59494 DF PROTO=TCP SPT=38738 DPT=9100 SEQ=3022103697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759120960000000001030307) 
Nov 23 04:30:20 localhost python3.9[231260]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:21 localhost python3.9[231370]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:30:22 localhost python3.9[231480]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:30:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59496 DF PROTO=TCP SPT=38738 DPT=9100 SEQ=3022103697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75912C990000000001030307) 
Nov 23 04:30:23 localhost python3.9[231590]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:30:23 localhost systemd[1]: Reloading.
Nov 23 04:30:23 localhost systemd-rc-local-generator[231611]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:30:23 localhost systemd-sysv-generator[231619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:25 localhost python3.9[231735]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:30:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24815 DF PROTO=TCP SPT=58548 DPT=9105 SEQ=4038274938 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759135190000000001030307) 
Nov 23 04:30:26 localhost python3.9[231846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:30:27 localhost python3.9[231954]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:30:28 localhost python3.9[232064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:28 localhost python3.9[232150]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890228.0084395-362-208433168518335/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=f66bfee7620b2dff2ba858e01c5b9d703edb0590 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:30:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:30:29 localhost podman[232168]: 2025-11-23 09:30:29.178806638 +0000 UTC m=+0.082547794 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 23 04:30:29 localhost podman[232168]: 2025-11-23 09:30:29.199933364 +0000 UTC m=+0.103674520 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:30:29 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:30:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24816 DF PROTO=TCP SPT=58548 DPT=9105 SEQ=4038274938 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759144DA0000000001030307) 
Nov 23 04:30:29 localhost python3.9[232279]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 23 04:30:30 localhost python3.9[232389]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 23 04:30:31 localhost python3.9[232500]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 23 04:30:32 localhost python3.9[232616]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005532586.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 23 04:30:34 localhost python3.9[232732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:35 localhost python3.9[232818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763890233.663453-566-61794313032031/.source.conf _original_basename=ceilometer.conf follow=False checksum=950edd520595720a58ffe786d84e54d033109e91 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59498 DF PROTO=TCP SPT=38738 DPT=9100 SEQ=3022103697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75915CD90000000001030307) 
Nov 23 04:30:35 localhost python3.9[232926]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:36 localhost python3.9[233012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763890235.2793458-566-16881484868219/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:36 localhost python3.9[233120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24817 DF PROTO=TCP SPT=58548 DPT=9105 SEQ=4038274938 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759164DA0000000001030307) 
Nov 23 04:30:37 localhost python3.9[233206]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1763890236.3741813-566-15196495944842/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:38 localhost python3.9[233314]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:30:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16291 DF PROTO=TCP SPT=43972 DPT=9102 SEQ=1996815502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759168D90000000001030307) 
Nov 23 04:30:38 localhost python3.9[233422]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:30:40 localhost python3.9[233530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:41 localhost python3.9[233616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890240.1616225-742-139405471837363/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26819 DF PROTO=TCP SPT=34972 DPT=9882 SEQ=4043116653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591732D0000000001030307) 
Nov 23 04:30:41 localhost python3.9[233724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:42 localhost python3.9[233779]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:42 localhost python3.9[233887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:43 localhost python3.9[233973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890242.3624125-742-230666951880099/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53036 DF PROTO=TCP SPT=51286 DPT=9101 SEQ=665108564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75917B9A0000000001030307) 
Nov 23 04:30:43 localhost python3.9[234081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:44 localhost python3.9[234167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890243.4637895-742-230961588990238/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:45 localhost python3.9[234275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:45 localhost python3.9[234361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890244.5404882-742-271573252324722/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:46 localhost python3.9[234512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:46 localhost python3.9[234622]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890245.7226446-742-226071756955265/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:30:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:30:47 localhost systemd[1]: tmp-crun.YFUl4Z.mount: Deactivated successfully.
Nov 23 04:30:47 localhost podman[234748]: 2025-11-23 09:30:47.124852407 +0000 UTC m=+0.092972218 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 04:30:47 localhost podman[234749]: 2025-11-23 09:30:47.164983843 +0000 UTC m=+0.133130714 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:30:47 localhost podman[234748]: 2025-11-23 09:30:47.184649401 +0000 UTC m=+0.152769192 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 23 04:30:47 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:30:47 localhost python3.9[234750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:47 localhost podman[234749]: 2025-11-23 09:30:47.231174086 +0000 UTC m=+0.199320957 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:30:47 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:30:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53037 DF PROTO=TCP SPT=51286 DPT=9101 SEQ=665108564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75918B5A0000000001030307) 
Nov 23 04:30:47 localhost python3.9[234876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890246.7883792-742-25007454932082/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:48 localhost python3.9[234984]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:49 localhost python3.9[235070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890248.4769697-742-221634656197747/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:49 localhost python3.9[235178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45364 DF PROTO=TCP SPT=53298 DPT=9100 SEQ=48698020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759195C70000000001030307) 
Nov 23 04:30:50 localhost python3.9[235264]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890249.4606924-742-207392764690267/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:51 localhost python3.9[235372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:51 localhost python3.9[235458]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890250.996871-742-266436748869493/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:52 localhost python3.9[235566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:52 localhost python3.9[235652]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890252.030476-742-169643132465641/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:30:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45366 DF PROTO=TCP SPT=53298 DPT=9100 SEQ=48698020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591A1D90000000001030307) 
Nov 23 04:30:53 localhost python3.9[235762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:30:54 localhost python3.9[235872]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:30:54 localhost systemd[1]: Reloading.
Nov 23 04:30:54 localhost systemd-rc-local-generator[235897]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:30:54 localhost systemd-sysv-generator[235905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:30:54 localhost systemd[1]: Listening on Podman API Socket.
Nov 23 04:30:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27043 DF PROTO=TCP SPT=55062 DPT=9105 SEQ=38898970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591AA190000000001030307) 
Nov 23 04:30:55 localhost python3.9[236022]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:56 localhost python3.9[236110]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890255.2624743-1259-103226997006605/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:30:56 localhost python3.9[236165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:30:57 localhost python3.9[236253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890255.2624743-1259-103226997006605/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:30:58 localhost python3.9[236363]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 23 04:30:59 localhost python3.9[236473]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:30:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27044 DF PROTO=TCP SPT=55062 DPT=9105 SEQ=38898970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591B9D90000000001030307) 
Nov 23 04:31:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:31:00 localhost podman[236562]: 2025-11-23 09:31:00.166026823 +0000 UTC m=+0.067837247 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:31:00 localhost podman[236562]: 2025-11-23 09:31:00.17881773 +0000 UTC m=+0.080628184 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:31:00 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:31:00 localhost nova_compute[230084]: 2025-11-23 09:31:00.996 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:01 localhost nova_compute[230084]: 2025-11-23 09:31:01.008 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:01 localhost python3[236602]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:31:01 localhost python3[236602]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "5b3bac081df6146e06acefa72320d250dc7d5f82abc7fbe0b9e83aec1e1587f5",#012          "Digest": "sha256:9f9f367ed4c85efb16c3a74a4bb707ff0db271d7bc5abc70a71e984b55f43003",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:9f9f367ed4c85efb16c3a74a4bb707ff0db271d7bc5abc70a71e984b55f43003"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-11-21T06:23:50.144134741Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251118",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          
},#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 505196287,#012          "VirtualSize": 505196287,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012                    "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012                    "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012                    "sha256:4ff7b15b3989ce3486d1ee120e82ba5b4acb5e4ad1d931e92c8d8e0851a32a6a",#012                    "sha256:847ae301d478780c04ade872e138a0bd4b67a423f03bd51e3a177105d1684cb3"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251118",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": 
"CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-11-18T01:56:49.795434035Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:49.795512415Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:52.547242013Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947310748Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947327778Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": 
"2025-11-21T06:10:01.947358359Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947372589Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94738527Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94739397Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:02.324930938Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:36.349393468Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main 
skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012           
Nov 23 04:31:01 localhost podman[236652]: 2025-11-23 09:31:01.411821384 +0000 UTC m=+0.081513797 container remove e964457b349878d85fd2feadb00697d6180d0cb849fa5738a1c40248842b2931 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da5facbcd2df03440dc3d35420cadd63'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Nov 23 04:31:01 localhost python3[236602]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute
Nov 23 04:31:01 localhost podman[236666]: 
Nov 23 04:31:01 localhost podman[236666]: 2025-11-23 09:31:01.515734269 +0000 UTC m=+0.085403479 container create 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm)
Nov 23 04:31:01 localhost podman[236666]: 2025-11-23 09:31:01.474974616 +0000 UTC m=+0.044643876 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Nov 23 04:31:01 localhost python3[236602]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start
Nov 23 04:31:02 localhost python3.9[236813]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.549 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.550 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.551 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.551 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.563 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.564 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.565 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.565 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.566 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.566 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.566 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.567 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.567 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.583 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.583 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.584 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.584 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:31:02 localhost nova_compute[230084]: 2025-11-23 09:31:02.585 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.045 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.245 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.247 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=13589MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.247 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.247 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.313 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.313 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.341 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:31:03 localhost python3.9[236949]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.816 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.823 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.841 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.843 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:31:03 localhost nova_compute[230084]: 2025-11-23 09:31:03.844 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:31:04 localhost python3.9[237078]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890263.7364917-1451-38479930987181/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:31:05 localhost python3.9[237133]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:31:05 localhost systemd[1]: Reloading.
Nov 23 04:31:05 localhost systemd-rc-local-generator[237160]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:31:05 localhost systemd-sysv-generator[237163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:05 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45368 DF PROTO=TCP SPT=53298 DPT=9100 SEQ=48698020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591D2DA0000000001030307) 
Nov 23 04:31:05 localhost python3.9[237223]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:31:06 localhost systemd[1]: Reloading.
Nov 23 04:31:06 localhost systemd-sysv-generator[237251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:31:06 localhost systemd-rc-local-generator[237247]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:06 localhost systemd[1]: Starting ceilometer_agent_compute container...
Nov 23 04:31:06 localhost systemd[1]: Started libcrun container.
Nov 23 04:31:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abacda98cfe0d7c098f9e7d4264336d7c401347a1a6b1ec5ed803eeb82107421/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 23 04:31:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abacda98cfe0d7c098f9e7d4264336d7c401347a1a6b1ec5ed803eeb82107421/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 23 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:31:06 localhost podman[237264]: 2025-11-23 09:31:06.558589605 +0000 UTC m=+0.139361880 container init 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + sudo -E kolla_set_configs
Nov 23 04:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: sudo: unable to send audit message: Operation not permitted
Nov 23 04:31:06 localhost podman[237264]: 2025-11-23 09:31:06.589474628 +0000 UTC m=+0.170246863 container start 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Nov 23 04:31:06 localhost podman[237264]: ceilometer_agent_compute
Nov 23 04:31:06 localhost systemd[1]: Started ceilometer_agent_compute container.
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Validating config file
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Copying service configuration files
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: INFO:__main__:Writing out command to execute
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: ++ cat /run_command
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + ARGS=
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + sudo kolla_copy_cacerts
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: sudo: unable to send audit message: Operation not permitted
Nov 23 04:31:06 localhost podman[237285]: 2025-11-23 09:31:06.676998571 +0000 UTC m=+0.083499078 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + [[ ! -n '' ]]
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + . kolla_extend_start
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + umask 0022
Nov 23 04:31:06 localhost ceilometer_agent_compute[237278]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 23 04:31:06 localhost podman[237285]: 2025-11-23 09:31:06.706034806 +0000 UTC m=+0.112535353 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Nov 23 04:31:06 localhost podman[237285]: unhealthy
Nov 23 04:31:06 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:31:06 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.397 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.397 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.397 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.397 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.398 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.399 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.400 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.401 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.402 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.402 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.402 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.402 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.402 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.402 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.402 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.402 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.403 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.406 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.407 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.408 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.411 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
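The dump above follows a regular shape: a journald prefix, the oslo log preamble, then `<group>.<option> = <value> log_opt_values <source path>`. When triaging a deployment it is often handy to pull these into key/value pairs. The helper below is an illustrative sketch (not part of ceilometer or oslo.config) that assumes exactly the line layout shown here, including the `cotyledon.oslo_config_glue` logger name:

```python
import re

# Hypothetical parser for oslo.config "log_opt_values" DEBUG lines as
# emitted above. Separator lines (the "****" / "====" rows) and other
# log records simply return None.
OPT_RE = re.compile(
    r"DEBUG cotyledon\.oslo_config_glue \[-\] "
    r"(?P<opt>[\w.]+)\s*=\s*(?P<value>.*?)\s+log_opt_values "
)

def parse_opt_line(line):
    """Return (option, value) or None if the line is not an option dump."""
    m = OPT_RE.search(line)
    if m is None:
        return None
    return m.group("opt"), m.group("value")

line = ("Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: "
        "2025-11-23 09:31:07.410 2 DEBUG cotyledon.oslo_config_glue [-] "
        "gnocchi.interface              = internal log_opt_values "
        "/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609")
print(parse_opt_line(line))  # ('gnocchi.interface', 'internal')
```

Note that secrets (`transport_url`, `vmware.host_password`, `coordination.backend_url`) are already masked as `****` by oslo.config before logging, so the parsed value for those options is the literal mask, not the credential.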
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.429 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.430 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.431 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.525 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.594 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.595 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.596 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.597 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.598 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.599 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.600 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.601 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.602 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.603 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.604 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.605 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.606 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.607 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.608 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.609 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.610 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.611 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.611 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.611 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.611 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.611 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.611 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.611 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.611 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.614 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.623 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:07.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27045 DF PROTO=TCP SPT=55062 DPT=9105 SEQ=38898970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591DAD90000000001030307) 
Nov 23 04:31:08 localhost python3.9[237424]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:31:08 localhost systemd[1]: Stopping ceilometer_agent_compute container...
Nov 23 04:31:08 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:08.529 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 23 04:31:08 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:08.631 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 23 04:31:08 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:08.631 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 23 04:31:08 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:08.631 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 23 04:31:08 localhost ceilometer_agent_compute[237278]: 2025-11-23 09:31:08.639 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Nov 23 04:31:08 localhost journal[229448]: End of file while reading data: Input/output error
Nov 23 04:31:08 localhost journal[229448]: End of file while reading data: Input/output error
Nov 23 04:31:08 localhost systemd[1]: libpod-4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.scope: Deactivated successfully.
Nov 23 04:31:08 localhost systemd[1]: libpod-4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.scope: Consumed 1.206s CPU time.
Nov 23 04:31:08 localhost podman[237428]: 2025-11-23 09:31:08.786298301 +0000 UTC m=+0.321262516 container died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:31:08 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.timer: Deactivated successfully.
Nov 23 04:31:08 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:31:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135-userdata-shm.mount: Deactivated successfully.
Nov 23 04:31:08 localhost systemd[1]: var-lib-containers-storage-overlay-abacda98cfe0d7c098f9e7d4264336d7c401347a1a6b1ec5ed803eeb82107421-merged.mount: Deactivated successfully.
Nov 23 04:31:08 localhost podman[237428]: 2025-11-23 09:31:08.841081234 +0000 UTC m=+0.376045429 container cleanup 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:31:08 localhost podman[237428]: ceilometer_agent_compute
Nov 23 04:31:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9301 DF PROTO=TCP SPT=56564 DPT=9102 SEQ=3639844190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591DED90000000001030307) 
Nov 23 04:31:08 localhost podman[237456]: 2025-11-23 09:31:08.947742491 +0000 UTC m=+0.069392378 container cleanup 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 04:31:08 localhost podman[237456]: ceilometer_agent_compute
Nov 23 04:31:08 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 23 04:31:08 localhost systemd[1]: Stopped ceilometer_agent_compute container.
Nov 23 04:31:08 localhost systemd[1]: Starting ceilometer_agent_compute container...
Nov 23 04:31:09 localhost systemd[1]: Started libcrun container.
Nov 23 04:31:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abacda98cfe0d7c098f9e7d4264336d7c401347a1a6b1ec5ed803eeb82107421/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 23 04:31:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abacda98cfe0d7c098f9e7d4264336d7c401347a1a6b1ec5ed803eeb82107421/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 23 04:31:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:31:09 localhost podman[237470]: 2025-11-23 09:31:09.113367181 +0000 UTC m=+0.133538217 container init 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + sudo -E kolla_set_configs
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: sudo: unable to send audit message: Operation not permitted
Nov 23 04:31:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:31:09 localhost podman[237470]: 2025-11-23 09:31:09.15627648 +0000 UTC m=+0.176447546 container start 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 04:31:09 localhost podman[237470]: ceilometer_agent_compute
Nov 23 04:31:09 localhost systemd[1]: Started ceilometer_agent_compute container.
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Validating config file
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Copying service configuration files
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: INFO:__main__:Writing out command to execute
Nov 23 04:31:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:31:09.228 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:31:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:31:09.229 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:31:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:31:09.229 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: ++ cat /run_command
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + ARGS=
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + sudo kolla_copy_cacerts
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: sudo: unable to send audit message: Operation not permitted
Nov 23 04:31:09 localhost podman[237493]: 2025-11-23 09:31:09.249973466 +0000 UTC m=+0.085670726 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + [[ ! -n '' ]]
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + . kolla_extend_start
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + umask 0022
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 23 04:31:09 localhost podman[237493]: 2025-11-23 09:31:09.28049687 +0000 UTC m=+0.116194090 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:31:09 localhost podman[237493]: unhealthy
Nov 23 04:31:09 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:31:09 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.993 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.994 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.995 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.996 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.997 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.998 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:09 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:09.999 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.000 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.001 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.002 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.003 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.004 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.005 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.006 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.024 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.025 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.026 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.040 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.152 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.152 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.152 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.152 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.152 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.152 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.152 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.153 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.154 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.155 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.156 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.157 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.158 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.159 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.160 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.161 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.162 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.163 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.164 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.166 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.167 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.168 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.169 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.170 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.171 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.172 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.172 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.172 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.172 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.172 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.175 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.184 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:31:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:31:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63794 DF PROTO=TCP SPT=60228 DPT=9882 SEQ=4270947215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591E85D0000000001030307) 
Nov 23 04:31:12 localhost python3.9[237631]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:31:12 localhost python3.9[237719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890271.7771862-1547-268821588204563/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:31:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49110 DF PROTO=TCP SPT=53484 DPT=9101 SEQ=2049908803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7591F0990000000001030307) 
Nov 23 04:31:14 localhost python3.9[237829]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Nov 23 04:31:14 localhost python3.9[237939]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:31:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49111 DF PROTO=TCP SPT=53484 DPT=9101 SEQ=2049908803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759200590000000001030307) 
Nov 23 04:31:17 localhost python3[238049]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:31:17 localhost podman[238089]: 
Nov 23 04:31:17 localhost podman[238089]: 2025-11-23 09:31:17.910586277 +0000 UTC m=+0.076875857 container create bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm)
Nov 23 04:31:17 localhost podman[238089]: 2025-11-23 09:31:17.87142089 +0000 UTC m=+0.037710510 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Nov 23 04:31:17 localhost python3[238049]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Nov 23 04:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:31:18 localhost podman[238127]: 2025-11-23 09:31:18.169326188 +0000 UTC m=+0.071977100 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 23 04:31:18 localhost podman[238126]: 2025-11-23 09:31:18.235686272 +0000 UTC m=+0.137741889 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 23 04:31:18 localhost podman[238126]: 2025-11-23 09:31:18.245965389 +0000 UTC m=+0.148021026 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:31:18 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:31:18 localhost podman[238127]: 2025-11-23 09:31:18.25986114 +0000 UTC m=+0.162512032 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 23 04:31:18 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:31:18 localhost python3.9[238278]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:31:19 localhost python3.9[238390]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:31:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35616 DF PROTO=TCP SPT=54374 DPT=9100 SEQ=2392625671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75920AF70000000001030307) 
Nov 23 04:31:20 localhost python3.9[238499]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890279.6938484-1706-82362661709945/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:31:20 localhost python3.9[238554]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:31:20 localhost systemd[1]: Reloading.
Nov 23 04:31:20 localhost systemd-rc-local-generator[238576]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:31:20 localhost systemd-sysv-generator[238583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:21 localhost python3.9[238644]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:31:21 localhost systemd[1]: Reloading.
Nov 23 04:31:22 localhost systemd-rc-local-generator[238672]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:31:22 localhost systemd-sysv-generator[238675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:22 localhost systemd[1]: Starting node_exporter container...
Nov 23 04:31:22 localhost systemd[1]: tmp-crun.ml5mM2.mount: Deactivated successfully.
Nov 23 04:31:22 localhost systemd[1]: Started libcrun container.
Nov 23 04:31:22 localhost sshd[238700]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:31:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:31:22 localhost podman[238685]: 2025-11-23 09:31:22.432735091 +0000 UTC m=+0.164792342 container init bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.451Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.451Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.451Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.452Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.452Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.452Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.452Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.452Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.452Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.452Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.452Z caller=node_exporter.go:117 level=info collector=arp
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=bcache
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=bonding
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=cpu
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=edac
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=filefd
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=netclass
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=netdev
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=netstat
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=nfs
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=nvme
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=softnet
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=systemd
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=xfs
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=node_exporter.go:117 level=info collector=zfs
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.453Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 23 04:31:22 localhost node_exporter[238697]: ts=2025-11-23T09:31:22.454Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 23 04:31:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:31:22 localhost podman[238685]: 2025-11-23 09:31:22.474105555 +0000 UTC m=+0.206162796 container start bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:31:22 localhost podman[238685]: node_exporter
Nov 23 04:31:22 localhost systemd[1]: Started node_exporter container.
Nov 23 04:31:22 localhost podman[238708]: 2025-11-23 09:31:22.570809057 +0000 UTC m=+0.091653032 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:31:22 localhost podman[238708]: 2025-11-23 09:31:22.581871044 +0000 UTC m=+0.102714979 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:31:22 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:31:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35618 DF PROTO=TCP SPT=54374 DPT=9100 SEQ=2392625671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592171A0000000001030307) 
Nov 23 04:31:24 localhost python3.9[238840]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:31:24 localhost systemd[1]: Stopping node_exporter container...
Nov 23 04:31:24 localhost systemd[1]: libpod-bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.scope: Deactivated successfully.
Nov 23 04:31:24 localhost podman[238844]: 2025-11-23 09:31:24.386809367 +0000 UTC m=+0.079026724 container died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:31:24 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.timer: Deactivated successfully.
Nov 23 04:31:24 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:31:24 localhost systemd[1]: tmp-crun.RzFBvt.mount: Deactivated successfully.
Nov 23 04:31:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799-userdata-shm.mount: Deactivated successfully.
Nov 23 04:31:24 localhost systemd[1]: var-lib-containers-storage-overlay-463d1d802eb6c258244d07ba6311f02502fc11a59a23c943fedde66f70728874-merged.mount: Deactivated successfully.
Nov 23 04:31:24 localhost podman[238844]: 2025-11-23 09:31:24.50316135 +0000 UTC m=+0.195378717 container cleanup bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:31:24 localhost podman[238844]: node_exporter
Nov 23 04:31:24 localhost systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 23 04:31:24 localhost podman[238871]: 2025-11-23 09:31:24.608726741 +0000 UTC m=+0.068463109 container cleanup bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:31:24 localhost podman[238871]: node_exporter
Nov 23 04:31:24 localhost systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Nov 23 04:31:24 localhost systemd[1]: Stopped node_exporter container.
Nov 23 04:31:24 localhost systemd[1]: Starting node_exporter container...
Nov 23 04:31:24 localhost systemd[1]: Started libcrun container.
Nov 23 04:31:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:31:24 localhost podman[238884]: 2025-11-23 09:31:24.813522392 +0000 UTC m=+0.135327357 container init bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.827Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.827Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.827Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.827Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.827Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.828Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.828Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.828Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.828Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=arp
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=bcache
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=bonding
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=cpu
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=edac
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=filefd
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=netclass
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=netdev
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=netstat
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=nfs
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=nvme
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=softnet
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=systemd
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=xfs
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.829Z caller=node_exporter.go:117 level=info collector=zfs
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.830Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 23 04:31:24 localhost node_exporter[238898]: ts=2025-11-23T09:31:24.830Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 23 04:31:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:31:24 localhost podman[238884]: 2025-11-23 09:31:24.846623291 +0000 UTC m=+0.168428216 container start bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:31:24 localhost podman[238884]: node_exporter
Nov 23 04:31:24 localhost systemd[1]: Started node_exporter container.
Nov 23 04:31:24 localhost podman[238907]: 2025-11-23 09:31:24.936810664 +0000 UTC m=+0.088104320 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:31:24 localhost podman[238907]: 2025-11-23 09:31:24.971845013 +0000 UTC m=+0.123138629 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:31:24 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:31:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2359 DF PROTO=TCP SPT=46880 DPT=9105 SEQ=1393106635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75921F590000000001030307) 
Nov 23 04:31:26 localhost python3.9[239037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:31:27 localhost python3.9[239125]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890285.9416604-1802-34696568579080/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:31:27 localhost sshd[239126]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:31:28 localhost python3.9[239237]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 23 04:31:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2360 DF PROTO=TCP SPT=46880 DPT=9105 SEQ=1393106635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75922F190000000001030307) 
Nov 23 04:31:29 localhost python3.9[239347]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:31:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:31:30 localhost systemd[1]: tmp-crun.JeGV4q.mount: Deactivated successfully.
Nov 23 04:31:30 localhost podman[239458]: 2025-11-23 09:31:30.402632358 +0000 UTC m=+0.088579132 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:31:30 localhost podman[239458]: 2025-11-23 09:31:30.41890018 +0000 UTC m=+0.104846904 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 23 04:31:30 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:31:30 localhost python3[239457]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:31:30 localhost sshd[239503]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:31:32 localhost podman[239490]: 2025-11-23 09:31:30.713628806 +0000 UTC m=+0.044248960 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Nov 23 04:31:32 localhost podman[239562]: 
Nov 23 04:31:32 localhost podman[239562]: 2025-11-23 09:31:32.64949187 +0000 UTC m=+0.086446126 container create 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter)
Nov 23 04:31:32 localhost podman[239562]: 2025-11-23 09:31:32.608570347 +0000 UTC m=+0.045524643 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Nov 23 04:31:32 localhost python3[239457]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Nov 23 04:31:33 localhost python3.9[239710]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:31:34 localhost python3.9[239822]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:31:35 localhost python3.9[239931]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890294.4336562-1960-146669159622742/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:31:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35620 DF PROTO=TCP SPT=54374 DPT=9100 SEQ=2392625671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759246DA0000000001030307) 
Nov 23 04:31:35 localhost python3.9[239986]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:31:35 localhost systemd[1]: Reloading.
Nov 23 04:31:35 localhost systemd-rc-local-generator[240011]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:31:35 localhost systemd-sysv-generator[240016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:35 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost python3.9[240077]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:31:36 localhost systemd[1]: Reloading.
Nov 23 04:31:36 localhost systemd-sysv-generator[240104]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:31:36 localhost systemd-rc-local-generator[240101]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:31:36 localhost systemd[1]: Starting podman_exporter container...
Nov 23 04:31:37 localhost systemd[1]: Started libcrun container.
Nov 23 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:31:37 localhost podman[240118]: 2025-11-23 09:31:37.102765384 +0000 UTC m=+0.138021306 container init 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:31:37 localhost podman_exporter[240133]: ts=2025-11-23T09:31:37.119Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 23 04:31:37 localhost podman_exporter[240133]: ts=2025-11-23T09:31:37.119Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 23 04:31:37 localhost podman_exporter[240133]: ts=2025-11-23T09:31:37.119Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 23 04:31:37 localhost podman_exporter[240133]: ts=2025-11-23T09:31:37.119Z caller=handler.go:105 level=info collector=container
Nov 23 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:31:37 localhost podman[240118]: 2025-11-23 09:31:37.13651228 +0000 UTC m=+0.171768142 container start 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:31:37 localhost podman[240118]: podman_exporter
Nov 23 04:31:37 localhost systemd[1]: Starting Podman API Service...
Nov 23 04:31:37 localhost systemd[1]: Started Podman API Service.
Nov 23 04:31:37 localhost systemd[1]: Started podman_exporter container.
Nov 23 04:31:37 localhost podman[240144]: time="2025-11-23T09:31:37Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 23 04:31:37 localhost podman[240144]: time="2025-11-23T09:31:37Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 23 04:31:37 localhost podman[240144]: time="2025-11-23T09:31:37Z" level=info msg="Setting parallel job count to 25"
Nov 23 04:31:37 localhost podman[240144]: time="2025-11-23T09:31:37Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 23 04:31:37 localhost podman[240144]: time="2025-11-23T09:31:37Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Nov 23 04:31:37 localhost podman[240144]: @ - - [23/Nov/2025:09:31:37 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 23 04:31:37 localhost podman[240144]: time="2025-11-23T09:31:37Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:31:37 localhost podman[240143]: 2025-11-23 09:31:37.277957375 +0000 UTC m=+0.135070460 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:31:37 localhost podman[240143]: 2025-11-23 09:31:37.291860106 +0000 UTC m=+0.148973141 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:31:37 localhost podman[240143]: unhealthy
Nov 23 04:31:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2361 DF PROTO=TCP SPT=46880 DPT=9105 SEQ=1393106635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75924EDA0000000001030307) 
Nov 23 04:31:38 localhost python3.9[240289]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:31:38 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:38 localhost systemd[1]: Stopping podman_exporter container...
Nov 23 04:31:38 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:31:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48098 DF PROTO=TCP SPT=41440 DPT=9102 SEQ=3772501595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759252D90000000001030307) 
Nov 23 04:31:38 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:31:38 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:31:38 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:31:38 localhost podman[240144]: @ - - [23/Nov/2025:09:31:37 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2790 "" "Go-http-client/1.1"
Nov 23 04:31:38 localhost systemd[1]: libpod-7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.scope: Deactivated successfully.
Nov 23 04:31:38 localhost podman[240293]: 2025-11-23 09:31:38.656580324 +0000 UTC m=+0.305325191 container died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:31:38 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.timer: Deactivated successfully.
Nov 23 04:31:38 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:31:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694-userdata-shm.mount: Deactivated successfully.
Nov 23 04:31:39 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:31:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:31:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:39 localhost podman[240318]: 2025-11-23 09:31:39.782128671 +0000 UTC m=+0.204120773 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:31:39 localhost podman[240318]: 2025-11-23 09:31:39.814981374 +0000 UTC m=+0.236973536 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 04:31:39 localhost podman[240318]: unhealthy
Nov 23 04:31:39 localhost podman[240293]: 2025-11-23 09:31:39.868860894 +0000 UTC m=+1.517605751 container cleanup 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:31:39 localhost podman[240293]: podman_exporter
Nov 23 04:31:39 localhost podman[240305]: 2025-11-23 09:31:39.879609093 +0000 UTC m=+1.221725125 container cleanup 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:31:40 localhost systemd[1]: var-lib-containers-storage-overlay-4eee1460b4bee1846bb0e39a837e31731096884bdf6658484746ee571c454179-merged.mount: Deactivated successfully.
Nov 23 04:31:40 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:31:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:31:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:31:40 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:31:40 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:31:40 localhost systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 23 04:31:40 localhost podman[240337]: 2025-11-23 09:31:40.51893965 +0000 UTC m=+0.074755534 container cleanup 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:31:40 localhost podman[240337]: podman_exporter
Nov 23 04:31:40 localhost systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Nov 23 04:31:40 localhost systemd[1]: Stopped podman_exporter container.
Nov 23 04:31:40 localhost systemd[1]: Starting podman_exporter container...
Nov 23 04:31:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30442 DF PROTO=TCP SPT=33220 DPT=9882 SEQ=1343026443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75925D8D0000000001030307) 
Nov 23 04:31:41 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:31:41 localhost systemd[1]: var-lib-containers-storage-overlay-98aa654fe2c1af9d2382bcadf7b54249f9a3b56612c5557d7ee9d5ac58709110-merged.mount: Deactivated successfully.
Nov 23 04:31:42 localhost systemd[1]: var-lib-containers-storage-overlay-98aa654fe2c1af9d2382bcadf7b54249f9a3b56612c5557d7ee9d5ac58709110-merged.mount: Deactivated successfully.
Nov 23 04:31:42 localhost systemd[1]: Started libcrun container.
Nov 23 04:31:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:31:42 localhost podman[240350]: 2025-11-23 09:31:42.12205984 +0000 UTC m=+1.290902932 container init 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:31:42 localhost podman_exporter[240364]: ts=2025-11-23T09:31:42.140Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 23 04:31:42 localhost podman_exporter[240364]: ts=2025-11-23T09:31:42.140Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 23 04:31:42 localhost podman[240144]: @ - - [23/Nov/2025:09:31:42 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 23 04:31:42 localhost podman[240144]: time="2025-11-23T09:31:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:31:42 localhost podman_exporter[240364]: ts=2025-11-23T09:31:42.140Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 23 04:31:42 localhost podman_exporter[240364]: ts=2025-11-23T09:31:42.140Z caller=handler.go:105 level=info collector=container
Nov 23 04:31:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:31:42 localhost podman[240350]: 2025-11-23 09:31:42.214152713 +0000 UTC m=+1.382995785 container start 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:31:42 localhost podman[240350]: podman_exporter
Nov 23 04:31:43 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:43 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:31:43 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:31:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4666 DF PROTO=TCP SPT=34620 DPT=9101 SEQ=956828352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759265D90000000001030307) 
Nov 23 04:31:43 localhost systemd[1]: Started podman_exporter container.
Nov 23 04:31:43 localhost podman[240373]: 2025-11-23 09:31:43.672863433 +0000 UTC m=+1.507459307 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:31:43 localhost podman[240373]: 2025-11-23 09:31:43.683965771 +0000 UTC m=+1.518561645 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:31:43 localhost podman[240373]: unhealthy
Nov 23 04:31:44 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 23 04:31:44 localhost sshd[240412]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:31:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:31:44 localhost podman[240144]: time="2025-11-23T09:31:44Z" level=error msg="Getting root fs size for \"00e1e058b1727a445e9174e4c3b590c1e7533f3fa9eef1831fa53f41cd4ae5be\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy"
Nov 23 04:31:44 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:44 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 23 04:31:45 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:31:45 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:31:45 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:31:45 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:31:45 localhost python3.9[240505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:31:45 localhost python3.9[240593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890304.5244951-2057-146091388628204/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:31:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:31:46 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:46 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:46 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:31:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:31:46 localhost python3.9[240703]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-98aa654fe2c1af9d2382bcadf7b54249f9a3b56612c5557d7ee9d5ac58709110-merged.mount: Deactivated successfully.
Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:31:47 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:31:47 localhost python3.9[240830]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:31:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4667 DF PROTO=TCP SPT=34620 DPT=9101 SEQ=956828352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592759A0000000001030307) 
Nov 23 04:31:48 localhost python3[240972]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:31:48 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 23 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:31:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:31:48 localhost systemd[1]: var-lib-containers-storage-overlay-49811fcc3e5d752fe49ab74a12b54f8b5604be5b8ba1bcaf72dfc24524c4f335-merged.mount: Deactivated successfully.
Nov 23 04:31:48 localhost podman[240987]: 2025-11-23 09:31:48.721760187 +0000 UTC m=+0.142840471 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 23 04:31:48 localhost podman[240986]: 2025-11-23 09:31:48.785701909 +0000 UTC m=+0.207288066 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 04:31:48 localhost podman[240987]: 2025-11-23 09:31:48.808042379 +0000 UTC m=+0.229122623 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true)
Nov 23 04:31:48 localhost podman[240986]: 2025-11-23 09:31:48.81618 +0000 UTC m=+0.237766167 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:31:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35140 DF PROTO=TCP SPT=33976 DPT=9100 SEQ=865164406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759280260000000001030307) 
Nov 23 04:31:50 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 23 04:31:50 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 23 04:31:50 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 23 04:31:50 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:31:50 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:31:51 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Nov 23 04:31:51 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:31:51 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:51 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 23 04:31:52 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 23 04:31:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:31:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35142 DF PROTO=TCP SPT=33976 DPT=9100 SEQ=865164406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75928C190000000001030307) 
Nov 23 04:31:53 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:31:53 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:31:53 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:31:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:31:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:31:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:31:55 localhost podman[241081]: 2025-11-23 09:31:55.185721549 +0000 UTC m=+0.086318403 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:31:55 localhost podman[241081]: 2025-11-23 09:31:55.219811755 +0000 UTC m=+0.120408579 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:31:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17464 DF PROTO=TCP SPT=47282 DPT=9105 SEQ=3438105685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759294990000000001030307) 
Nov 23 04:31:55 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:31:55 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:31:55 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 23 04:31:56 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:31:56 localhost systemd[1]: var-lib-containers-storage-overlay-49811fcc3e5d752fe49ab74a12b54f8b5604be5b8ba1bcaf72dfc24524c4f335-merged.mount: Deactivated successfully.
Nov 23 04:31:56 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:31:57 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Nov 23 04:31:57 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:31:58 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:31:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17465 DF PROTO=TCP SPT=47282 DPT=9105 SEQ=3438105685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592A4590000000001030307) 
Nov 23 04:31:59 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:31:59 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:31:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:31:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:32:00 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:00 localhost podman[241138]: 2025-11-23 09:32:00.681602274 +0000 UTC m=+0.085848920 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 23 04:32:00 localhost podman[241138]: 2025-11-23 09:32:00.815878721 +0000 UTC m=+0.220125327 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:32:00 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:32:01 localhost podman[241049]: 2025-11-23 09:31:50.33842073 +0000 UTC m=+0.037439323 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Nov 23 04:32:01 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:32:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 5640 writes, 24K keys, 5640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5640 writes, 724 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:32:03 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:32:03 localhost systemd[1]: var-lib-containers-storage-overlay-02c1e8ec154b353c1f5742760d5a341313065b707b9f4dfe4e57636918f18c91-merged.mount: Deactivated successfully.
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.836 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.836 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.851 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.851 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.851 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.859 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.860 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.861 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.861 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.861 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.862 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.862 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.862 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.876 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.877 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.877 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.877 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:32:03 localhost nova_compute[230084]: 2025-11-23 09:32:03.878 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:32:04 localhost nova_compute[230084]: 2025-11-23 09:32:04.328 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:32:04 localhost nova_compute[230084]: 2025-11-23 09:32:04.516 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:32:04 localhost nova_compute[230084]: 2025-11-23 09:32:04.518 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=13235MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:32:04 localhost nova_compute[230084]: 2025-11-23 09:32:04.518 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:32:04 localhost nova_compute[230084]: 2025-11-23 09:32:04.519 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:32:04 localhost nova_compute[230084]: 2025-11-23 09:32:04.570 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:32:04 localhost nova_compute[230084]: 2025-11-23 09:32:04.571 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:32:04 localhost nova_compute[230084]: 2025-11-23 09:32:04.587 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:32:05 localhost nova_compute[230084]: 2025-11-23 09:32:05.037 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:32:05 localhost nova_compute[230084]: 2025-11-23 09:32:05.044 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:32:05 localhost nova_compute[230084]: 2025-11-23 09:32:05.063 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:32:05 localhost nova_compute[230084]: 2025-11-23 09:32:05.065 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:32:05 localhost nova_compute[230084]: 2025-11-23 09:32:05.066 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:32:05 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35144 DF PROTO=TCP SPT=33976 DPT=9100 SEQ=865164406 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592BCD90000000001030307) 
Nov 23 04:32:05 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:05 localhost nova_compute[230084]: 2025-11-23 09:32:05.751 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:32:05 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:05 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:32:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 4929 writes, 22K keys, 4929 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4929 writes, 684 syncs, 7.21 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:32:06 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17466 DF PROTO=TCP SPT=47282 DPT=9105 SEQ=3438105685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592C4D90000000001030307) 
Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:07 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:08 localhost podman[241223]: 2025-11-23 09:32:06.422255398 +0000 UTC m=+0.059639160 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Nov 23 04:32:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49760 DF PROTO=TCP SPT=55734 DPT=9102 SEQ=1824046712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592C8DA0000000001030307) 
Nov 23 04:32:09 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:32:09 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:32:09.229 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:32:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:32:09.229 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:32:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:32:09.229 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:32:09 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:09 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:09 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:32:09 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:32:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:32:10 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:32:10 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:10 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:10 localhost podman[241236]: 2025-11-23 09:32:10.67485613 +0000 UTC m=+0.086366675 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:32:10 localhost podman[241236]: 2025-11-23 09:32:10.70680766 +0000 UTC m=+0.118318225 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:32:10 localhost podman[241236]: unhealthy
Nov 23 04:32:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24597 DF PROTO=TCP SPT=35066 DPT=9882 SEQ=4106815780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592D2BD0000000001030307) 
Nov 23 04:32:11 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:11 localhost systemd[1]: var-lib-containers-storage-overlay-83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7-merged.mount: Deactivated successfully.
Nov 23 04:32:12 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:32:12 localhost systemd[1]: var-lib-containers-storage-overlay-02c1e8ec154b353c1f5742760d5a341313065b707b9f4dfe4e57636918f18c91-merged.mount: Deactivated successfully.
Nov 23 04:32:13 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:32:13 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:32:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38195 DF PROTO=TCP SPT=44984 DPT=9101 SEQ=2153180187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592DB190000000001030307) 
Nov 23 04:32:13 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Nov 23 04:32:13 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Nov 23 04:32:13 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Nov 23 04:32:14 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Nov 23 04:32:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:32:15 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:15 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:15 localhost podman[241252]: 2025-11-23 09:32:15.172440444 +0000 UTC m=+0.078466889 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:32:15 localhost podman[241252]: 2025-11-23 09:32:15.179157989 +0000 UTC m=+0.085184414 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:32:15 localhost podman[241252]: unhealthy
Nov 23 04:32:15 localhost podman[241223]: 
Nov 23 04:32:15 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:32:15 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:32:15 localhost podman[241223]: 2025-11-23 09:32:15.580302489 +0000 UTC m=+9.217686211 container create 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container)
Nov 23 04:32:15 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:15 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:15 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:16 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:16 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:16 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:17 localhost python3[240972]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Nov 23 04:32:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38196 DF PROTO=TCP SPT=44984 DPT=9101 SEQ=2153180187 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592EAD90000000001030307) 
Nov 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:32:17 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:18 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Nov 23 04:32:18 localhost systemd[1]: var-lib-containers-storage-overlay-ae91eec9c362b2c490fa8a14d0f5059208afabdd28cf60783bdf8d722c1b54ce-merged.mount: Deactivated successfully.
Nov 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:19 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2455 DF PROTO=TCP SPT=38720 DPT=9100 SEQ=1810497964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7592F5570000000001030307) 
Nov 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:32:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:32:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-572e34444311607b9314c97442135b544356d8a95d71aa7adf26ce39fbf50aaa-merged.mount: Deactivated successfully.
Nov 23 04:32:20 localhost systemd[1]: var-lib-containers-storage-overlay-572e34444311607b9314c97442135b544356d8a95d71aa7adf26ce39fbf50aaa-merged.mount: Deactivated successfully.
Nov 23 04:32:20 localhost podman[241317]: 2025-11-23 09:32:20.648718629 +0000 UTC m=+0.080292399 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 04:32:20 localhost podman[241318]: 2025-11-23 09:32:20.708091965 +0000 UTC m=+0.147657257 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 23 04:32:20 localhost podman[241317]: 2025-11-23 09:32:20.733165241 +0000 UTC m=+0.164739061 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 23 04:32:20 localhost podman[241318]: 2025-11-23 09:32:20.741124033 +0000 UTC m=+0.180689325 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:32:21 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:21 localhost systemd[1]: var-lib-containers-storage-overlay-83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7-merged.mount: Deactivated successfully.
Nov 23 04:32:22 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:22 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:23 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:23 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:32:23 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:32:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2457 DF PROTO=TCP SPT=38720 DPT=9100 SEQ=1810497964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759301590000000001030307) 
Nov 23 04:32:23 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Nov 23 04:32:23 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Nov 23 04:32:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:24 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Nov 23 04:32:24 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:24 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:25 localhost python3.9[241451]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:32:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36113 DF PROTO=TCP SPT=48902 DPT=9105 SEQ=3077625583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759309D90000000001030307) 
Nov 23 04:32:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:25 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:25 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:26 localhost python3.9[241563]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:32:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:26 localhost podman[240144]: time="2025-11-23T09:32:26Z" level=error msg="Getting root fs size for \"3991a60c769119b6a8fde44ebada09d2b02a185d92d334e06f0a09819df8ffb0\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Nov 23 04:32:26 localhost python3.9[241672]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890346.1254406-2215-231496738955625/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:32:26 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Nov 23 04:32:26 localhost systemd[1]: var-lib-containers-storage-overlay-ae91eec9c362b2c490fa8a14d0f5059208afabdd28cf60783bdf8d722c1b54ce-merged.mount: Deactivated successfully.
Nov 23 04:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:32:27 localhost systemd[1]: tmp-crun.55F4TM.mount: Deactivated successfully.
Nov 23 04:32:27 localhost podman[241727]: 2025-11-23 09:32:27.070131867 +0000 UTC m=+0.095412482 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:32:27 localhost podman[241727]: 2025-11-23 09:32:27.079019542 +0000 UTC m=+0.104300177 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:32:27 localhost python3.9[241728]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:32:27 localhost systemd[1]: Reloading.
Nov 23 04:32:27 localhost systemd-sysv-generator[241774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:32:27 localhost systemd-rc-local-generator[241769]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:27 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:27 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:32:28 localhost python3.9[241840]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:28 localhost systemd[1]: var-lib-containers-storage-overlay-870d975636503a07ec195e49b00132bfc6eee6e29391d2ce8497d2068e2c55c9-merged.mount: Deactivated successfully.
Nov 23 04:32:29 localhost systemd[1]: var-lib-containers-storage-overlay-870d975636503a07ec195e49b00132bfc6eee6e29391d2ce8497d2068e2c55c9-merged.mount: Deactivated successfully.
Nov 23 04:32:29 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:32:29 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:29 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:29 localhost systemd[1]: Reloading.
Nov 23 04:32:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36114 DF PROTO=TCP SPT=48902 DPT=9105 SEQ=3077625583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759319990000000001030307) 
Nov 23 04:32:29 localhost systemd-sysv-generator[241866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:32:29 localhost systemd-rc-local-generator[241862]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:32:29 localhost systemd[1]: Starting openstack_network_exporter container...
Nov 23 04:32:29 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:29 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:30 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:32:31 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:31 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:31 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:31 localhost systemd[1]: Started libcrun container.
Nov 23 04:32:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1131328ca38466d331318e95e55a53a11e8781527cf4c00362f21121ad802/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 23 04:32:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1131328ca38466d331318e95e55a53a11e8781527cf4c00362f21121ad802/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 04:32:31 localhost podman[241892]: 2025-11-23 09:32:31.801904192 +0000 UTC m=+0.708990337 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:32:31 localhost podman[241892]: 2025-11-23 09:32:31.810813388 +0000 UTC m=+0.717899563 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 04:32:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:32:31 localhost podman[241881]: 2025-11-23 09:32:31.815852026 +0000 UTC m=+2.096538126 container init 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public)
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *bridge.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *coverage.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *datapath.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *iface.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *memory.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *ovnnorthd.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *ovn.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *ovsdbserver.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *pmd_perf.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *pmd_rxq.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: INFO    09:32:31 main.go:48: registering *vswitch.Collector
Nov 23 04:32:31 localhost openstack_network_exporter[241905]: NOTICE  09:32:31 main.go:82: listening on http://:9105/metrics
Nov 23 04:32:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:32:31 localhost podman[241881]: 2025-11-23 09:32:31.864955242 +0000 UTC m=+2.145641352 container start 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, distribution-scope=public, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 04:32:31 localhost podman[241881]: openstack_network_exporter
Nov 23 04:32:33 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:33 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:33 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:33 localhost systemd[1]: Started openstack_network_exporter container.
Nov 23 04:32:33 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:33 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:32:33 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:33 localhost podman[241921]: 2025-11-23 09:32:33.605344932 +0000 UTC m=+1.737715923 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 23 04:32:33 localhost podman[241921]: 2025-11-23 09:32:33.628737005 +0000 UTC m=+1.761107996 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Nov 23 04:32:34 localhost python3.9[242050]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:32:34 localhost systemd[1]: Stopping openstack_network_exporter container...
Nov 23 04:32:34 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2459 DF PROTO=TCP SPT=38720 DPT=9100 SEQ=1810497964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759330D90000000001030307) 
Nov 23 04:32:35 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:35 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:36 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:32:36 localhost systemd[1]: libpod-02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.scope: Deactivated successfully.
Nov 23 04:32:36 localhost podman[242054]: 2025-11-23 09:32:36.071985955 +0000 UTC m=+1.695041811 container died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, vcs-type=git, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc.)
Nov 23 04:32:36 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.timer: Deactivated successfully.
Nov 23 04:32:36 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390-userdata-shm.mount: Deactivated successfully.
Nov 23 04:32:36 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:37 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:37 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:37 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36115 DF PROTO=TCP SPT=48902 DPT=9105 SEQ=3077625583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75933ADA0000000001030307) 
Nov 23 04:32:37 localhost systemd[1]: var-lib-containers-storage-overlay-a6b1131328ca38466d331318e95e55a53a11e8781527cf4c00362f21121ad802-merged.mount: Deactivated successfully.
Nov 23 04:32:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:37 localhost podman[242054]: 2025-11-23 09:32:37.946407336 +0000 UTC m=+3.569463162 container cleanup 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:32:37 localhost podman[242054]: openstack_network_exporter
Nov 23 04:32:38 localhost podman[242066]: 2025-11-23 09:32:38.013191699 +0000 UTC m=+1.938420404 container cleanup 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal)
Nov 23 04:32:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47666 DF PROTO=TCP SPT=55930 DPT=9102 SEQ=823047442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75933CD90000000001030307) 
Nov 23 04:32:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:38 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:39 localhost systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 23 04:32:39 localhost podman[242080]: 2025-11-23 09:32:39.168016606 +0000 UTC m=+0.079090998 container cleanup 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 23 04:32:39 localhost podman[242080]: openstack_network_exporter
Nov 23 04:32:39 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49955 DF PROTO=TCP SPT=43072 DPT=9882 SEQ=1083946168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759347EC0000000001030307) 
Nov 23 04:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:41 localhost systemd[1]: var-lib-containers-storage-overlay-f74697c8bdf6b204cc9bb228876ae3d75c64bcbd0730efaa9813a6a90647aeee-merged.mount: Deactivated successfully.
Nov 23 04:32:41 localhost systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Nov 23 04:32:41 localhost systemd[1]: Stopped openstack_network_exporter container.
Nov 23 04:32:41 localhost systemd[1]: Starting openstack_network_exporter container...
Nov 23 04:32:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:43 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:32:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57692 DF PROTO=TCP SPT=47756 DPT=9101 SEQ=2509244645 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759350590000000001030307) 
Nov 23 04:32:43 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:43 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:43 localhost systemd[1]: Started libcrun container.
Nov 23 04:32:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1131328ca38466d331318e95e55a53a11e8781527cf4c00362f21121ad802/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 23 04:32:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1131328ca38466d331318e95e55a53a11e8781527cf4c00362f21121ad802/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 23 04:32:43 localhost podman[242104]: 2025-11-23 09:32:43.718647375 +0000 UTC m=+0.329219032 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:32:43 localhost podman[242104]: 2025-11-23 09:32:43.721855987 +0000 UTC m=+0.332427644 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 23 04:32:43 localhost podman[242104]: unhealthy
Nov 23 04:32:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:32:43 localhost podman[242092]: 2025-11-23 09:32:43.75550213 +0000 UTC m=+2.213079592 container init 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *bridge.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *coverage.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *datapath.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *iface.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *memory.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *ovnnorthd.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *ovn.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *ovsdbserver.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *pmd_perf.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *pmd_rxq.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: INFO    09:32:43 main.go:48: registering *vswitch.Collector
Nov 23 04:32:43 localhost openstack_network_exporter[242118]: NOTICE  09:32:43 main.go:82: listening on http://:9105/metrics
Nov 23 04:32:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:32:43 localhost podman[242092]: 2025-11-23 09:32:43.778611986 +0000 UTC m=+2.236189448 container start 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 04:32:43 localhost podman[242092]: openstack_network_exporter
Nov 23 04:32:44 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:44 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:45 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:32:45 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:46 localhost systemd[1]: var-lib-containers-storage-overlay-870d975636503a07ec195e49b00132bfc6eee6e29391d2ce8497d2068e2c55c9-merged.mount: Deactivated successfully.
Nov 23 04:32:46 localhost systemd[1]: var-lib-containers-storage-overlay-870d975636503a07ec195e49b00132bfc6eee6e29391d2ce8497d2068e2c55c9-merged.mount: Deactivated successfully.
Nov 23 04:32:46 localhost systemd[1]: Started openstack_network_exporter container.
Nov 23 04:32:46 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:32:46 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:32:46 localhost podman[242133]: 2025-11-23 09:32:46.264046248 +0000 UTC m=+2.481875912 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:32:46 localhost podman[242144]: 2025-11-23 09:32:46.295853754 +0000 UTC m=+0.455647470 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:32:46 localhost podman[242144]: 2025-11-23 09:32:46.307791107 +0000 UTC m=+0.467584833 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:32:46 localhost podman[242144]: unhealthy
Nov 23 04:32:46 localhost podman[242133]: 2025-11-23 09:32:46.326948313 +0000 UTC m=+2.544777957 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Nov 23 04:32:47 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:32:47 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:47 localhost sshd[242192]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:32:47 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57693 DF PROTO=TCP SPT=47756 DPT=9101 SEQ=2509244645 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759360190000000001030307) 
Nov 23 04:32:48 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:48 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:48 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:48 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:32:48 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:32:48 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:32:49 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:32:49 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:32:49 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:32:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58578 DF PROTO=TCP SPT=41356 DPT=9100 SEQ=1311790558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75936A870000000001030307) 
Nov 23 04:32:50 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:50 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:50 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:32:51 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:51 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:52 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:52 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:32:52 localhost podman[240144]: time="2025-11-23T09:32:52Z" level=error msg="Getting root fs size for \"39f87b23405fa715e610ccb623706c55cabfaa47ef7100c36dbf45d725ba2f2f\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": creating overlay mount to /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/2H5N33XPGQU52ZJC6COYHHC23P,upperdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/diff,workdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/work,nodev,metacopy=on\": no such file or directory"
Nov 23 04:32:52 localhost python3.9[242335]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Nov 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:32:52 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:32:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:32:53 localhost podman[242372]: 2025-11-23 09:32:53.170005207 +0000 UTC m=+0.074645064 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 23 04:32:53 localhost podman[242373]: 2025-11-23 09:32:53.227006513 +0000 UTC m=+0.132031570 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:32:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58580 DF PROTO=TCP SPT=41356 DPT=9100 SEQ=1311790558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759376990000000001030307) 
Nov 23 04:32:53 localhost podman[242372]: 2025-11-23 09:32:53.252612653 +0000 UTC m=+0.157252500 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:32:53 localhost podman[242373]: 2025-11-23 09:32:53.273154164 +0000 UTC m=+0.178179201 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:32:53 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:32:53 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Nov 23 04:32:53 localhost systemd[1]: var-lib-containers-storage-overlay-fd19aa3bbf1d46933c6539044a339b8340627ee4c1548c2610024703ba9478a8-merged.mount: Deactivated successfully.
Nov 23 04:32:54 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:32:54 localhost sshd[242413]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:32:54 localhost systemd[1]: var-lib-containers-storage-overlay-f74697c8bdf6b204cc9bb228876ae3d75c64bcbd0730efaa9813a6a90647aeee-merged.mount: Deactivated successfully.
Nov 23 04:32:54 localhost systemd[1]: var-lib-containers-storage-overlay-f74697c8bdf6b204cc9bb228876ae3d75c64bcbd0730efaa9813a6a90647aeee-merged.mount: Deactivated successfully.
Nov 23 04:32:54 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:32:54 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:32:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52019 DF PROTO=TCP SPT=46152 DPT=9105 SEQ=2669897876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75937ED90000000001030307) 
Nov 23 04:32:55 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully.
Nov 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully.
Nov 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully.
Nov 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:56 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:57 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:57 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:32:57 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully.
Nov 23 04:32:57 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:32:58 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:32:58 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:32:58 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:58 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:32:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:32:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52020 DF PROTO=TCP SPT=46152 DPT=9105 SEQ=2669897876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75938E990000000001030307) 
Nov 23 04:32:59 localhost podman[242434]: 2025-11-23 09:32:59.419007862 +0000 UTC m=+0.073765823 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:32:59 localhost podman[242434]: 2025-11-23 09:32:59.453917117 +0000 UTC m=+0.108675028 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238-merged.mount: Deactivated successfully.
Nov 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:33:00 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully.
Nov 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:33:00 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:02 localhost systemd[1]: var-lib-containers-storage-overlay-5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254-merged.mount: Deactivated successfully.
Nov 23 04:33:03 localhost nova_compute[230084]: 2025-11-23 09:33:03.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:03 localhost nova_compute[230084]: 2025-11-23 09:33:03.546 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:33:03 localhost nova_compute[230084]: 2025-11-23 09:33:03.546 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:33:03 localhost nova_compute[230084]: 2025-11-23 09:33:03.557 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:33:03 localhost nova_compute[230084]: 2025-11-23 09:33:03.558 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:03 localhost nova_compute[230084]: 2025-11-23 09:33:03.558 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:33:03 localhost podman[242455]: 2025-11-23 09:33:03.662808558 +0000 UTC m=+0.073663669 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 23 04:33:03 localhost podman[242455]: 2025-11-23 09:33:03.700947706 +0000 UTC m=+0.111802807 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:04 localhost nova_compute[230084]: 2025-11-23 09:33:04.545 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:04 localhost nova_compute[230084]: 2025-11-23 09:33:04.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:04 localhost nova_compute[230084]: 2025-11-23 09:33:04.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:04 localhost nova_compute[230084]: 2025-11-23 09:33:04.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:04 localhost nova_compute[230084]: 2025-11-23 09:33:04.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:04 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:04 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:33:05 localhost nova_compute[230084]: 2025-11-23 09:33:05.545 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:05 localhost nova_compute[230084]: 2025-11-23 09:33:05.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:33:05 localhost nova_compute[230084]: 2025-11-23 09:33:05.566 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:33:05 localhost nova_compute[230084]: 2025-11-23 09:33:05.566 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:33:05 localhost nova_compute[230084]: 2025-11-23 09:33:05.566 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:33:05 localhost nova_compute[230084]: 2025-11-23 09:33:05.566 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:33:05 localhost nova_compute[230084]: 2025-11-23 09:33:05.567 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:33:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58582 DF PROTO=TCP SPT=41356 DPT=9100 SEQ=1311790558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593A6DA0000000001030307) 
Nov 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Nov 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-fd19aa3bbf1d46933c6539044a339b8340627ee4c1548c2610024703ba9478a8-merged.mount: Deactivated successfully.
Nov 23 04:33:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.007 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.192 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.194 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=13184MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.194 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.195 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.308 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.309 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.359 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.745 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.750 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.769 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.772 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:33:06 localhost nova_compute[230084]: 2025-11-23 09:33:06.772 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:07 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully.
Nov 23 04:33:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52021 DF PROTO=TCP SPT=46152 DPT=9105 SEQ=2669897876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593AEDA0000000001030307) 
Nov 23 04:33:08 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:33:08 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully.
Nov 23 04:33:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60748 DF PROTO=TCP SPT=51344 DPT=9102 SEQ=134881526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593B2DA0000000001030307) 
Nov 23 04:33:08 localhost systemd[1]: var-lib-containers-storage-overlay-0ef5cc2b89c8a41c643bbf27e239c40ba42d2785cdc67bc5e5d4e7b894568a96-merged.mount: Deactivated successfully.
Nov 23 04:33:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:33:09.230 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:33:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:33:09.230 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:33:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:33:09.230 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:33:09 localhost systemd[1]: var-lib-containers-storage-overlay-0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c-merged.mount: Deactivated successfully.
Nov 23 04:33:09 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:09 localhost systemd[1]: var-lib-containers-storage-overlay-8bd5331fcbf7daa72bbe8d627667a57f81716bed82cfa9e39304d68f21f76c87-merged.mount: Deactivated successfully.
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:33:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:33:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33178 DF PROTO=TCP SPT=38046 DPT=9101 SEQ=1272164298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593B9640000000001030307) 
Nov 23 04:33:10 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:33:10 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:33:11 localhost systemd[1]: var-lib-containers-storage-overlay-3e0acb8482e457c209ba092ee475b7db4ab9004896d495d3f7437d3c12bacbd3-merged.mount: Deactivated successfully.
Nov 23 04:33:11 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:12 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33180 DF PROTO=TCP SPT=38046 DPT=9101 SEQ=1272164298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593C5590000000001030307) 
Nov 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:13 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:13 localhost podman[240144]: time="2025-11-23T09:33:13Z" level=error msg="Getting root fs size for \"53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": unmounting layer f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958: replacing mount point \"/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged\": device or resource busy"
Nov 23 04:33:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:15 localhost systemd[1]: var-lib-containers-storage-overlay-36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2-merged.mount: Deactivated successfully.
Nov 23 04:33:16 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:16 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:16 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:33:16 localhost podman[242519]: 2025-11-23 09:33:16.433954596 +0000 UTC m=+0.090120057 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 04:33:16 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:16 localhost podman[242519]: 2025-11-23 09:33:16.597939557 +0000 UTC m=+0.254104998 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:33:16 localhost podman[242519]: unhealthy
Nov 23 04:33:16 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:16 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33181 DF PROTO=TCP SPT=38046 DPT=9101 SEQ=1272164298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593D5190000000001030307) 
Nov 23 04:33:18 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:33:18 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:33:18 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:33:18 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:33:18 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:33:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:33:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:33:18 localhost podman[242536]: 2025-11-23 09:33:18.933493285 +0000 UTC m=+0.083672203 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:33:18 localhost podman[242536]: 2025-11-23 09:33:18.942423442 +0000 UTC m=+0.092602380 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:33:18 localhost podman[242536]: unhealthy
Nov 23 04:33:18 localhost podman[242535]: 2025-11-23 09:33:18.912594565 +0000 UTC m=+0.070341236 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 23 04:33:18 localhost podman[242535]: 2025-11-23 09:33:18.997906809 +0000 UTC m=+0.155653490 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, vendor=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 04:33:19 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:19 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:33:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41385 DF PROTO=TCP SPT=50632 DPT=9100 SEQ=3613822610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593DFB60000000001030307) 
Nov 23 04:33:20 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:33:20 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:33:20 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:33:20 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:33:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:20 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:21 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:21 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:21 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:22 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:23 localhost systemd[1]: var-lib-containers-storage-overlay-8bd5331fcbf7daa72bbe8d627667a57f81716bed82cfa9e39304d68f21f76c87-merged.mount: Deactivated successfully.
Nov 23 04:33:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41387 DF PROTO=TCP SPT=50632 DPT=9100 SEQ=3613822610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593EBD90000000001030307) 
Nov 23 04:33:24 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:33:24 localhost systemd[1]: var-lib-containers-storage-overlay-b75d69f0f2cde9fc9824305dd942961655ee26dd5e86c5ce60bd1c2a9ea6511d-merged.mount: Deactivated successfully.
Nov 23 04:33:24 localhost systemd[1]: var-lib-containers-storage-overlay-b75d69f0f2cde9fc9824305dd942961655ee26dd5e86c5ce60bd1c2a9ea6511d-merged.mount: Deactivated successfully.
Nov 23 04:33:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:33:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:33:25 localhost podman[242577]: 2025-11-23 09:33:25.186059024 +0000 UTC m=+0.091608449 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 04:33:25 localhost podman[242577]: 2025-11-23 09:33:25.19396499 +0000 UTC m=+0.099514455 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:33:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63514 DF PROTO=TCP SPT=44226 DPT=9105 SEQ=4070313764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7593F41A0000000001030307) 
Nov 23 04:33:26 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:26 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:26 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:26 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:26 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:26 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:33:26 localhost podman[242578]: 2025-11-23 09:33:26.51365714 +0000 UTC m=+1.415595331 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:33:26 localhost podman[242578]: 2025-11-23 09:33:26.552449395 +0000 UTC m=+1.454387566 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 23 04:33:28 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:28 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:28 localhost podman[240144]: time="2025-11-23T09:33:28Z" level=error msg="Getting root fs size for \"53f652f02e9e6ca8858723edfc3d1bae897b099591d9019e1fa8de6b93d75ee6\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": unmounting layer f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958: replacing mount point \"/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged\": device or resource busy"
Nov 23 04:33:28 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:28 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:33:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63515 DF PROTO=TCP SPT=44226 DPT=9105 SEQ=4070313764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759403D90000000001030307) 
Nov 23 04:33:29 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:29 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:29 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:33:30 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:30 localhost podman[242615]: 2025-11-23 09:33:30.355520703 +0000 UTC m=+0.090205792 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:33:30 localhost podman[242615]: 2025-11-23 09:33:30.365858004 +0000 UTC m=+0.100543103 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:33:31 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:33:31 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:31 localhost systemd[1]: var-lib-containers-storage-overlay-36ed30db913d769e51862131ed0542c8a5042ec73389824bba392a46661e53c2-merged.mount: Deactivated successfully.
Nov 23 04:33:33 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:33:33 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:33:33 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:33:33 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:34 localhost systemd[1]: var-lib-containers-storage-overlay-61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac-merged.mount: Deactivated successfully.
Nov 23 04:33:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:33:34 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:33:35 localhost podman[242639]: 2025-11-23 09:33:35.091754083 +0000 UTC m=+0.125748502 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 23 04:33:35 localhost podman[242639]: 2025-11-23 09:33:35.102767751 +0000 UTC m=+0.136762160 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:33:35 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:33:35 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41389 DF PROTO=TCP SPT=50632 DPT=9100 SEQ=3613822610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75941CD90000000001030307) 
Nov 23 04:33:35 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:36 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:36 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:33:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:37 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63516 DF PROTO=TCP SPT=44226 DPT=9105 SEQ=4070313764 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759424D90000000001030307) 
Nov 23 04:33:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48996 DF PROTO=TCP SPT=49134 DPT=9102 SEQ=3341006658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759428DA0000000001030307) 
Nov 23 04:33:39 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:39 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:40 localhost systemd[1]: var-lib-containers-storage-overlay-b75d69f0f2cde9fc9824305dd942961655ee26dd5e86c5ce60bd1c2a9ea6511d-merged.mount: Deactivated successfully.
Nov 23 04:33:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:41 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34346 DF PROTO=TCP SPT=37626 DPT=9882 SEQ=1705850734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7594324C0000000001030307) 
Nov 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519-merged.mount: Deactivated successfully.
Nov 23 04:33:42 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:42 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:42 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:42 localhost podman[240144]: time="2025-11-23T09:33:42Z" level=error msg="Getting root fs size for \"65c4a42c89bc73a3a93324769095fd35360f35c3c991ca9facc3a11bfac52690\": getting diffsize of layer \"3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34\" and its parent \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\": unmounting layer 3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34: replacing mount point \"/var/lib/containers/storage/overlay/3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34/merged\": device or resource busy"
Nov 23 04:33:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42863 DF PROTO=TCP SPT=42556 DPT=9101 SEQ=2930471661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75943A990000000001030307) 
Nov 23 04:33:44 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:45 localhost systemd[1]: var-lib-containers-storage-overlay-61ba4c39508a77773d58bccaa9f7155621160178a641e21dfc240667e2b172ac-merged.mount: Deactivated successfully.
Nov 23 04:33:45 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:45 localhost podman[240144]: time="2025-11-23T09:33:45Z" level=error msg="Getting root fs size for \"71a639099803e2740c2111ba3d7729a08e7aac1e702f9228b434374d5a2decd5\": getting diffsize of layer \"3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34\" and its parent \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\": unmounting layer 3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34: replacing mount point \"/var/lib/containers/storage/overlay/3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34/merged\": device or resource busy"
Nov 23 04:33:45 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:47 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:47 localhost systemd[1]: var-lib-containers-storage-overlay-f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae-merged.mount: Deactivated successfully.
Nov 23 04:33:47 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42864 DF PROTO=TCP SPT=42556 DPT=9101 SEQ=2930471661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75944A590000000001030307) 
Nov 23 04:33:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:48 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Nov 23 04:33:48 localhost systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully.
Nov 23 04:33:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:33:48 localhost systemd[1]: tmp-crun.QvjK8I.mount: Deactivated successfully.
Nov 23 04:33:48 localhost podman[242658]: 2025-11-23 09:33:48.392766578 +0000 UTC m=+0.065039794 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:33:48 localhost podman[242658]: 2025-11-23 09:33:48.421246473 +0000 UTC m=+0.093519699 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:33:48 localhost podman[242658]: unhealthy
Nov 23 04:33:49 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:49 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:49 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:50 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:50 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:33:50 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:33:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54623 DF PROTO=TCP SPT=41532 DPT=9100 SEQ=1152475426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759454E70000000001030307) 
Nov 23 04:33:50 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:33:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:33:51 localhost systemd[1]: tmp-crun.uLTYwp.mount: Deactivated successfully.
Nov 23 04:33:51 localhost podman[242677]: 2025-11-23 09:33:51.050061936 +0000 UTC m=+0.092252145 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Nov 23 04:33:51 localhost podman[242678]: 2025-11-23 09:33:51.060044238 +0000 UTC m=+0.099318321 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:33:51 localhost podman[242677]: 2025-11-23 09:33:51.088898353 +0000 UTC m=+0.131088562 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, config_id=edpm, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:33:51 localhost podman[242678]: 2025-11-23 09:33:51.144629862 +0000 UTC m=+0.183903955 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:33:51 localhost podman[242678]: unhealthy
Nov 23 04:33:51 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:51 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:52 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:33:52 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:33:52 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54625 DF PROTO=TCP SPT=41532 DPT=9100 SEQ=1152475426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759460DA0000000001030307) 
Nov 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:54 localhost systemd[1]: var-lib-containers-storage-overlay-3e5f46ececf4e2a45fe338d0e9975d8bf3a57f4252d09cb6730fa5bdc618f519-merged.mount: Deactivated successfully.
Nov 23 04:33:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55859 DF PROTO=TCP SPT=51198 DPT=9105 SEQ=37749654 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759469590000000001030307) 
Nov 23 04:33:56 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:33:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:33:56 localhost systemd[1]: var-lib-containers-storage-overlay-f89b45405a25ad5b4e2d46e88df5b29e5f9747842d208330f6b1b95a66e4c65e-merged.mount: Deactivated successfully.
Nov 23 04:33:56 localhost systemd[1]: var-lib-containers-storage-overlay-f89b45405a25ad5b4e2d46e88df5b29e5f9747842d208330f6b1b95a66e4c65e-merged.mount: Deactivated successfully.
Nov 23 04:33:57 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:57 localhost podman[242767]: 2025-11-23 09:33:57.481279009 +0000 UTC m=+0.955386166 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Nov 23 04:33:57 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:33:57 localhost podman[242767]: 2025-11-23 09:33:57.515827553 +0000 UTC m=+0.989934630 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 04:33:58 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:33:58 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Nov 23 04:33:58 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Nov 23 04:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:33:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:33:59 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55860 DF PROTO=TCP SPT=51198 DPT=9105 SEQ=37749654 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759479190000000001030307) 
Nov 23 04:33:59 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:33:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:33:59 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:33:59 localhost podman[242804]: 2025-11-23 09:33:59.534110018 +0000 UTC m=+0.445122991 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:33:59 localhost podman[242804]: 2025-11-23 09:33:59.609900691 +0000 UTC m=+0.520913624 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:34:00 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:00 localhost podman[240144]: time="2025-11-23T09:34:00Z" level=error msg="Getting root fs size for \"82025383cb9f5c74e5cfc1a92a0dd13d4771cecd58c130470a8b18a5611fc15b\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Nov 23 04:34:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:00 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:00 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:34:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:34:01 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:01 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:01 localhost podman[242845]: 2025-11-23 09:34:01.230331082 +0000 UTC m=+0.086604868 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:34:01 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Nov 23 04:34:01 localhost podman[242845]: 2025-11-23 09:34:01.237909741 +0000 UTC m=+0.094183447 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:34:01 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:01 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:34:02 localhost systemd[1]: var-lib-containers-storage-overlay-ab58928ede0dd62dc1c6a47bb498643050414b3c034952943115aaac1d21638e-merged.mount: Deactivated successfully.
Nov 23 04:34:03 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully.
Nov 23 04:34:03 localhost systemd[1]: var-lib-containers-storage-overlay-94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7-merged.mount: Deactivated successfully.
Nov 23 04:34:03 localhost systemd[1]: var-lib-containers-storage-overlay-94bc28862446c9d52bea5cb761ece58f4b7ce0b6f4d30585ec973efe7d5007f7-merged.mount: Deactivated successfully.
Nov 23 04:34:03 localhost nova_compute[230084]: 2025-11-23 09:34:03.770 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:34:03 localhost nova_compute[230084]: 2025-11-23 09:34:03.789 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:34:03 localhost nova_compute[230084]: 2025-11-23 09:34:03.790 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:34:03 localhost nova_compute[230084]: 2025-11-23 09:34:03.790 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:34:03 localhost nova_compute[230084]: 2025-11-23 09:34:03.808 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:34:03 localhost nova_compute[230084]: 2025-11-23 09:34:03.809 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:34:04 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:34:04 localhost systemd[1]: var-lib-containers-storage-overlay-f39a205d302a156a5f2a3aa4bc9925f1bc3511011e0b51cacf63cc0ce8fb46ae-merged.mount: Deactivated successfully.
Nov 23 04:34:04 localhost systemd[1]: var-lib-containers-storage-overlay-1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed-merged.mount: Deactivated successfully.
Nov 23 04:34:04 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Nov 23 04:34:04 localhost systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Nov 23 04:34:04 localhost nova_compute[230084]: 2025-11-23 09:34:04.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:34:04 localhost nova_compute[230084]: 2025-11-23 09:34:04.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:34:05 localhost systemd[1]: var-lib-containers-storage-overlay-cb3e7a7b413bc69102c7d8435b32176125a43d794f5f87a0ef3f45710221e344-merged.mount: Deactivated successfully.
Nov 23 04:34:05 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully.
Nov 23 04:34:05 localhost systemd[1]: var-lib-containers-storage-overlay-7f2203bb12e8263bda654cf994f0f457cee81cbd85ac1474f70f0dcdab850bfc-merged.mount: Deactivated successfully.
Nov 23 04:34:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54627 DF PROTO=TCP SPT=41532 DPT=9100 SEQ=1152475426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759490DA0000000001030307) 
Nov 23 04:34:05 localhost nova_compute[230084]: 2025-11-23 09:34:05.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:34:05 localhost nova_compute[230084]: 2025-11-23 09:34:05.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:34:40 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:34:40 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:34:40 localhost rsyslogd[760]: imjournal: 220 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 23 04:34:40 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 23 04:34:40 localhost systemd[1]: var-lib-containers-storage-overlay-8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088-merged.mount: Deactivated successfully.
Nov 23 04:34:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=175 DF PROTO=TCP SPT=42776 DPT=9882 SEQ=3851767038 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75951CAC0000000001030307) 
Nov 23 04:34:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:41 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:34:42 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Nov 23 04:34:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10683 DF PROTO=TCP SPT=54088 DPT=9101 SEQ=197482578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759525190000000001030307) 
Nov 23 04:34:44 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:44 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:46 localhost podman[240144]: time="2025-11-23T09:34:46Z" level=error msg="Getting root fs size for \"9fc56eef0125e2366cfe8af1b8db0a7292c67970a357429cb5e6065236af301f\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Nov 23 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Nov 23 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-db5fdd560461b59dde387445faf8d301f89b69299c673ce8ac4c4cf7e6c32a08-merged.mount: Deactivated successfully.
Nov 23 04:34:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10684 DF PROTO=TCP SPT=54088 DPT=9101 SEQ=197482578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759534D90000000001030307) 
Nov 23 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Nov 23 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Nov 23 04:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:49 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Nov 23 04:34:49 localhost systemd[1]: var-lib-containers-storage-overlay-e923447bbcace1356863835c0089a8ad58eff9a2f791c2262e7c0fcdcbc23235-merged.mount: Deactivated successfully.
Nov 23 04:34:49 localhost systemd[1]: var-lib-containers-storage-overlay-e923447bbcace1356863835c0089a8ad58eff9a2f791c2262e7c0fcdcbc23235-merged.mount: Deactivated successfully.
Nov 23 04:34:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1911 DF PROTO=TCP SPT=45436 DPT=9100 SEQ=3922441646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75953F470000000001030307) 
Nov 23 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Nov 23 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-86ebbfbb226c83e40f7926e5ff88fb92af0ff068958d5052a05d57da8af7f4e7-merged.mount: Deactivated successfully.
Nov 23 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Nov 23 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Nov 23 04:34:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:51 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Nov 23 04:34:51 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Nov 23 04:34:51 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:51 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:51 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:34:52 localhost podman[243080]: 2025-11-23 09:34:52.173585251 +0000 UTC m=+0.082140777 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:34:52 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:34:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:52 localhost podman[243080]: 2025-11-23 09:34:52.218803818 +0000 UTC m=+0.127359264 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3)
Nov 23 04:34:52 localhost podman[243080]: unhealthy
Nov 23 04:34:52 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1913 DF PROTO=TCP SPT=45436 DPT=9100 SEQ=3922441646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75954B590000000001030307) 
Nov 23 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:34:53 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:34:53 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:34:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:34:54 localhost systemd[1]: tmp-crun.7hsdFx.mount: Deactivated successfully.
Nov 23 04:34:54 localhost podman[243097]: 2025-11-23 09:34:54.186311229 +0000 UTC m=+0.094727409 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:34:54 localhost podman[243097]: 2025-11-23 09:34:54.190710534 +0000 UTC m=+0.099126734 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:34:54 localhost podman[243097]: unhealthy
Nov 23 04:34:54 localhost systemd[1]: tmp-crun.Sd0ohC.mount: Deactivated successfully.
Nov 23 04:34:54 localhost podman[243096]: 2025-11-23 09:34:54.293789851 +0000 UTC m=+0.204577143 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Nov 23 04:34:54 localhost podman[243096]: 2025-11-23 09:34:54.314871345 +0000 UTC m=+0.225658607 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Nov 23 04:34:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:54 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:34:54 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:34:54 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Nov 23 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-e7f1db98db621c4802b84f320b15cb4cfdf509b4923adf3a0c9430db7ca54f5b-merged.mount: Deactivated successfully.
Nov 23 04:34:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62623 DF PROTO=TCP SPT=59056 DPT=9105 SEQ=2641134334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759553990000000001030307) 
Nov 23 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 23 04:34:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:56 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:34:56 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:56 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 23 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay-8966de692c5ef73ff7c49f2ba50482154d6194c0a24f7cd903edec55b84de088-merged.mount: Deactivated successfully.
Nov 23 04:34:58 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:34:58 localhost systemd[1]: var-lib-containers-storage-overlay-b77c99c8f5ad929cf9fda4baf1f02e4b486405893a4fe6affbd8bef9d65bdac7-merged.mount: Deactivated successfully.
Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:34:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62624 DF PROTO=TCP SPT=59056 DPT=9105 SEQ=2641134334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759563590000000001030307) 
Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Nov 23 04:34:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Nov 23 04:34:59 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:35:01 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:01 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:01 localhost podman[243184]: 2025-11-23 09:35:01.755745266 +0000 UTC m=+0.666689406 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 04:35:01 localhost podman[243184]: 2025-11-23 09:35:01.760761997 +0000 UTC m=+0.671706107 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:35:02 localhost nova_compute[230084]: 2025-11-23 09:35:02.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:02 localhost nova_compute[230084]: 2025-11-23 09:35:02.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 04:35:02 localhost nova_compute[230084]: 2025-11-23 09:35:02.559 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 04:35:02 localhost nova_compute[230084]: 2025-11-23 09:35:02.560 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:02 localhost nova_compute[230084]: 2025-11-23 09:35:02.560 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 04:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:02 localhost nova_compute[230084]: 2025-11-23 09:35:02.567 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:02 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:35:02 localhost podman[243220]: 2025-11-23 09:35:02.792073056 +0000 UTC m=+0.694702051 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:35:02 localhost podman[243220]: 2025-11-23 09:35:02.880628611 +0000 UTC m=+0.783257656 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:35:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:03 localhost nova_compute[230084]: 2025-11-23 09:35:03.575 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:04 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:35:04 localhost podman[243261]: 2025-11-23 09:35:04.193515473 +0000 UTC m=+1.283704196 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:35:04 localhost podman[243261]: 2025-11-23 09:35:04.205003495 +0000 UTC m=+1.295192208 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-3ff2a78c8cc62b07a928d0b2b3f68754d6aca28a37f592a56866830a4a003509-merged.mount: Deactivated successfully.
Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:04 localhost nova_compute[230084]: 2025-11-23 09:35:04.542 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:04 localhost nova_compute[230084]: 2025-11-23 09:35:04.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:04 localhost nova_compute[230084]: 2025-11-23 09:35:04.546 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:35:04 localhost nova_compute[230084]: 2025-11-23 09:35:04.546 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:04 localhost nova_compute[230084]: 2025-11-23 09:35:04.556 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:35:04 localhost nova_compute[230084]: 2025-11-23 09:35:04.556 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:04 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:35:04 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91-merged.mount: Deactivated successfully.
Nov 23 04:35:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1915 DF PROTO=TCP SPT=45436 DPT=9100 SEQ=3922441646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75957ADA0000000001030307) 
Nov 23 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:35:05 localhost nova_compute[230084]: 2025-11-23 09:35:05.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91-merged.mount: Deactivated successfully.
Nov 23 04:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Nov 23 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-e923447bbcace1356863835c0089a8ad58eff9a2f791c2262e7c0fcdcbc23235-merged.mount: Deactivated successfully.
Nov 23 04:35:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62625 DF PROTO=TCP SPT=59056 DPT=9105 SEQ=2641134334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759582DA0000000001030307) 
Nov 23 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:07 localhost nova_compute[230084]: 2025-11-23 09:35:07.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:07 localhost nova_compute[230084]: 2025-11-23 09:35:07.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:07 localhost nova_compute[230084]: 2025-11-23 09:35:07.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-21324da701cda3a12eecea997e418bbdcb417c66b3380ebb8a1f96a7c081e785-merged.mount: Deactivated successfully.
Nov 23 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-21324da701cda3a12eecea997e418bbdcb417c66b3380ebb8a1f96a7c081e785-merged.mount: Deactivated successfully.
Nov 23 04:35:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14885 DF PROTO=TCP SPT=44740 DPT=9102 SEQ=2217166311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759586D90000000001030307) 
Nov 23 04:35:08 localhost nova_compute[230084]: 2025-11-23 09:35:08.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:08 localhost nova_compute[230084]: 2025-11-23 09:35:08.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:35:08 localhost nova_compute[230084]: 2025-11-23 09:35:08.563 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:35:08 localhost nova_compute[230084]: 2025-11-23 09:35:08.563 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:35:08 localhost nova_compute[230084]: 2025-11-23 09:35:08.564 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:35:08 localhost nova_compute[230084]: 2025-11-23 09:35:08.564 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:35:08 localhost nova_compute[230084]: 2025-11-23 09:35:08.564 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.014 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.167 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.169 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=13070MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.169 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.169 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:35:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:35:09.233 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:35:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:35:09.234 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:35:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:35:09.234 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.244 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.245 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.292 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.346 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.347 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.366 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.384 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: HW_CPU_X86_BMI,COMPUTE_NODE,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 04:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.406 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.866 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.872 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.887 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.889 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:35:09 localhost nova_compute[230084]: 2025-11-23 09:35:09.890 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:35:10 localhost podman[243330]: 2025-11-23 09:35:10.173734584 +0000 UTC m=+0.079531800 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:35:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:35:10 localhost podman[243330]: 2025-11-23 09:35:10.194849598 +0000 UTC m=+0.100646774 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 23 04:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Nov 23 04:35:11 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:35:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4996 DF PROTO=TCP SPT=58570 DPT=9882 SEQ=2304447432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759591DC0000000001030307) 
Nov 23 04:35:11 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:11 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 23 04:35:11 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 23 04:35:11 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:11 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:11 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:12 localhost podman[240144]: time="2025-11-23T09:35:12Z" level=error msg="Getting root fs size for \"a9f6427e837ebe18ecaa4f38f8534486fcc8f6a32de3e2577b78a88883f2de64\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy"
Nov 23 04:35:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:12 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50787 DF PROTO=TCP SPT=50502 DPT=9101 SEQ=3914229267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75959A190000000001030307) 
Nov 23 04:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 23 04:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-b77c99c8f5ad929cf9fda4baf1f02e4b486405893a4fe6affbd8bef9d65bdac7-merged.mount: Deactivated successfully.
Nov 23 04:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Nov 23 04:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-f1efd3f89201e2ae0ff08eee424918b729c04d8c64dfb0844aa7373839c58b35-merged.mount: Deactivated successfully.
Nov 23 04:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Nov 23 04:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Nov 23 04:35:16 localhost systemd[1]: var-lib-containers-storage-overlay-0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8-merged.mount: Deactivated successfully.
Nov 23 04:35:16 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Nov 23 04:35:16 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50788 DF PROTO=TCP SPT=50502 DPT=9101 SEQ=3914229267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7595A9D90000000001030307) 
Nov 23 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 23 04:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-3ff2a78c8cc62b07a928d0b2b3f68754d6aca28a37f592a56866830a4a003509-merged.mount: Deactivated successfully.
Nov 23 04:35:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17737 DF PROTO=TCP SPT=57740 DPT=9100 SEQ=799931032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7595B4760000000001030307) 
Nov 23 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91-merged.mount: Deactivated successfully.
Nov 23 04:35:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:22 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17739 DF PROTO=TCP SPT=57740 DPT=9100 SEQ=799931032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7595C0990000000001030307) 
Nov 23 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-21324da701cda3a12eecea997e418bbdcb417c66b3380ebb8a1f96a7c081e785-merged.mount: Deactivated successfully.
Nov 23 04:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-21324da701cda3a12eecea997e418bbdcb417c66b3380ebb8a1f96a7c081e785-merged.mount: Deactivated successfully.
Nov 23 04:35:24 localhost podman[243349]: 2025-11-23 09:35:24.043404304 +0000 UTC m=+0.088124465 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 04:35:24 localhost podman[243349]: 2025-11-23 09:35:24.050831819 +0000 UTC m=+0.095552020 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 23 04:35:24 localhost podman[243349]: unhealthy
Nov 23 04:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:35:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe-merged.mount: Deactivated successfully.
Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe-merged.mount: Deactivated successfully.
Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 23 04:35:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56001 DF PROTO=TCP SPT=33874 DPT=9105 SEQ=3268768174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7595C8D90000000001030307) 
Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Nov 23 04:35:25 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Nov 23 04:35:25 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:35:25 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:35:25 localhost podman[243365]: 2025-11-23 09:35:25.522131681 +0000 UTC m=+0.567336938 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Nov 23 04:35:25 localhost podman[243365]: 2025-11-23 09:35:25.563937407 +0000 UTC m=+0.609142674 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal)
Nov 23 04:35:25 localhost podman[243366]: 2025-11-23 09:35:25.556189545 +0000 UTC m=+0.596937245 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:35:25 localhost podman[243366]: 2025-11-23 09:35:25.643126807 +0000 UTC m=+0.683874537 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:35:25 localhost podman[243366]: unhealthy
Nov 23 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:26 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:35:26 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:35:26 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:35:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:28 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:28 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:28 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-0e4a8a7d9871b00def826b947fe67563fec7276b8de017c820e96afd9bc15049-merged.mount: Deactivated successfully.
Nov 23 04:35:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56002 DF PROTO=TCP SPT=33874 DPT=9105 SEQ=3268768174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7595D8990000000001030307) 
Nov 23 04:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Nov 23 04:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Nov 23 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-f1efd3f89201e2ae0ff08eee424918b729c04d8c64dfb0844aa7373839c58b35-merged.mount: Deactivated successfully.
Nov 23 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-f1efd3f89201e2ae0ff08eee424918b729c04d8c64dfb0844aa7373839c58b35-merged.mount: Deactivated successfully.
Nov 23 04:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Nov 23 04:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Nov 23 04:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:35:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:35:33 localhost sshd[243421]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:35:33 localhost systemd[1]: tmp-crun.iQAwgz.mount: Deactivated successfully.
Nov 23 04:35:33 localhost podman[243409]: 2025-11-23 09:35:33.192246063 +0000 UTC m=+0.095405882 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 04:35:33 localhost podman[243409]: 2025-11-23 09:35:33.224198348 +0000 UTC m=+0.127358127 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 23 04:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:35:35 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:35:35 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:35 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:35 localhost podman[243428]: 2025-11-23 09:35:35.058269417 +0000 UTC m=+0.746506350 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:35:35 localhost podman[243440]: 2025-11-23 09:35:35.128921374 +0000 UTC m=+0.126124084 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:35:35 localhost podman[243440]: 2025-11-23 09:35:35.135585199 +0000 UTC m=+0.132787919 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:35:35 localhost podman[243428]: 2025-11-23 09:35:35.156746949 +0000 UTC m=+0.844983852 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 04:35:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17741 DF PROTO=TCP SPT=57740 DPT=9100 SEQ=799931032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7595F0D90000000001030307) 
Nov 23 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:36 localhost podman[240144]: time="2025-11-23T09:35:36Z" level=error msg="Getting root fs size for \"bec134e398a11486951f3f61342c303faa1ad73300eef9ef7968c02dcdfb9071\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy"
Nov 23 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:37 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:35:37 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:35:37 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56003 DF PROTO=TCP SPT=33874 DPT=9105 SEQ=3268768174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7595F8D90000000001030307) 
Nov 23 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39351 DF PROTO=TCP SPT=47534 DPT=9102 SEQ=3919728183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7595FCD90000000001030307) 
Nov 23 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:39 localhost systemd[1]: var-lib-containers-storage-overlay-93ac0215c9a9b4bb41dd367b1bc2b7778f57d0798ffad4b1a89d1bceb5bde4fe-merged.mount: Deactivated successfully.
Nov 23 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51151 DF PROTO=TCP SPT=37466 DPT=9882 SEQ=669692224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596070D0000000001030307) 
Nov 23 04:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-84d72a79238da9e41e472230adf30122e356dada3dee1ed84822dcd8584621e6-merged.mount: Deactivated successfully.
Nov 23 04:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:41 localhost podman[243473]: 2025-11-23 09:35:41.670561477 +0000 UTC m=+0.077141681 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 04:35:41 localhost podman[243473]: 2025-11-23 09:35:41.709910686 +0000 UTC m=+0.116490870 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 23 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-0e4a8a7d9871b00def826b947fe67563fec7276b8de017c820e96afd9bc15049-merged.mount: Deactivated successfully.
Nov 23 04:35:42 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:35:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51834 DF PROTO=TCP SPT=34612 DPT=9101 SEQ=1071061235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75960F5A0000000001030307) 
Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Nov 23 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Nov 23 04:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:35:45 localhost sshd[243490]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:35:45 localhost systemd-logind[761]: New session 56 of user zuul.
Nov 23 04:35:45 localhost systemd[1]: Started Session 56 of User zuul.
Nov 23 04:35:45 localhost python3.9[243586]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 23 04:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:47 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:47 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:47 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:35:47 localhost podman[240144]: time="2025-11-23T09:35:47Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged: invalid argument"
Nov 23 04:35:47 localhost podman[240144]: time="2025-11-23T09:35:47Z" level=error msg="Getting root fs size for \"e240e84de03758dccb40489aefdd81a78032646244b043a461a581d3be28a4a6\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": creating overlay mount to /var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/FX77NTVUH5FWEFPOVFTPP2MBYL:/var/lib/containers/storage/overlay/l/3WOEMXHQAFPSYJGLHG4D7MH7G2:/var/lib/containers/storage/overlay/l/2H5N33XPGQU52ZJC6COYHHC23P,upperdir=/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/diff,workdir=/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/work,nodev,metacopy=on\": no such file or directory"
Nov 23 04:35:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51835 DF PROTO=TCP SPT=34612 DPT=9101 SEQ=1071061235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75961F190000000001030307) 
Nov 23 04:35:47 localhost sshd[243599]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:50 localhost systemd[1]: var-lib-containers-storage-overlay-84d72a79238da9e41e472230adf30122e356dada3dee1ed84822dcd8584621e6-merged.mount: Deactivated successfully.
Nov 23 04:35:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49516 DF PROTO=TCP SPT=47248 DPT=9100 SEQ=4237767716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759629A70000000001030307) 
Nov 23 04:35:50 localhost python3.9[243710]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:35:50 localhost systemd[1]: Started libpod-conmon-eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.scope.
Nov 23 04:35:50 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully.
Nov 23 04:35:50 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:35:50 localhost podman[243711]: 2025-11-23 09:35:50.87502024 +0000 UTC m=+0.137461953 container exec eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 04:35:50 localhost podman[243711]: 2025-11-23 09:35:50.882941799 +0000 UTC m=+0.145383562 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:35:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:35:52 localhost systemd[1]: libpod-conmon-eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.scope: Deactivated successfully.
Nov 23 04:35:52 localhost python3.9[243850]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:35:52 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:52 localhost systemd[1]: Started libpod-conmon-eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.scope.
Nov 23 04:35:52 localhost podman[243851]: 2025-11-23 09:35:52.509032032 +0000 UTC m=+0.122703564 container exec eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:35:52 localhost podman[243851]: 2025-11-23 09:35:52.542962368 +0000 UTC m=+0.156633900 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:35:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49518 DF PROTO=TCP SPT=47248 DPT=9100 SEQ=4237767716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759635990000000001030307) 
Nov 23 04:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 23 04:35:54 localhost python3.9[243989]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:35:54 localhost python3.9[244099]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 23 04:35:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55552 DF PROTO=TCP SPT=38158 DPT=9105 SEQ=1076383581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75963E190000000001030307) 
Nov 23 04:35:55 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:35:55 localhost systemd[1]: var-lib-containers-storage-overlay-6ac3b29840d2c794dc0c0033b626822dc9158444d4c44499bc50ec992e63998d-merged.mount: Deactivated successfully.
Nov 23 04:35:55 localhost systemd[1]: libpod-conmon-eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.scope: Deactivated successfully.
Nov 23 04:35:56 localhost podman[244112]: 2025-11-23 09:35:56.004928937 +0000 UTC m=+0.269536694 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:35:56 localhost podman[244112]: 2025-11-23 09:35:56.040930348 +0000 UTC m=+0.305538095 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 23 04:35:56 localhost podman[244112]: unhealthy
Nov 23 04:35:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:35:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:35:58 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:35:58 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Failed with result 'exit-code'.
Nov 23 04:35:59 localhost podman[244130]: 2025-11-23 09:35:59.011437429 +0000 UTC m=+1.916483818 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Nov 23 04:35:59 localhost podman[244130]: 2025-11-23 09:35:59.049361971 +0000 UTC m=+1.954408320 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Nov 23 04:35:59 localhost podman[244131]: 2025-11-23 09:35:59.053228193 +0000 UTC m=+1.955847167 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:35:59 localhost podman[244131]: 2025-11-23 09:35:59.133635768 +0000 UTC m=+2.036254702 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:35:59 localhost podman[244131]: unhealthy
Nov 23 04:35:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55553 DF PROTO=TCP SPT=38158 DPT=9105 SEQ=1076383581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75964DDA0000000001030307) 
Nov 23 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:36:00 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:36:00 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 04:36:00 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Failed with result 'exit-code'.
Nov 23 04:36:01 localhost python3.9[244279]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:01 localhost systemd[1]: Started libpod-conmon-8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.scope.
Nov 23 04:36:01 localhost podman[244280]: 2025-11-23 09:36:01.744026072 +0000 UTC m=+0.117435754 container exec 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 04:36:01 localhost podman[244280]: 2025-11-23 09:36:01.779966961 +0000 UTC m=+0.153376673 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:36:01 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:36:03 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:36:04 localhost python3.9[244461]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:36:04 localhost systemd[1]: libpod-conmon-8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.scope: Deactivated successfully.
Nov 23 04:36:04 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:36:04 localhost systemd[1]: Started libpod-conmon-8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.scope.
Nov 23 04:36:04 localhost podman[244462]: 2025-11-23 09:36:04.442168965 +0000 UTC m=+0.312566481 container exec 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:36:04 localhost podman[244462]: 2025-11-23 09:36:04.450172267 +0000 UTC m=+0.320569813 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:36:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49520 DF PROTO=TCP SPT=47248 DPT=9100 SEQ=4237767716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759664D90000000001030307) 
Nov 23 04:36:05 localhost podman[244546]: 2025-11-23 09:36:05.348717342 +0000 UTC m=+0.249426023 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 04:36:05 localhost podman[244546]: 2025-11-23 09:36:05.385016951 +0000 UTC m=+0.285725722 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:36:05 localhost nova_compute[230084]: 2025-11-23 09:36:05.886 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:05 localhost nova_compute[230084]: 2025-11-23 09:36:05.887 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:05 localhost nova_compute[230084]: 2025-11-23 09:36:05.887 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:06 localhost python3.9[244676]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:06 localhost nova_compute[230084]: 2025-11-23 09:36:06.542 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:06 localhost nova_compute[230084]: 2025-11-23 09:36:06.554 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:06 localhost nova_compute[230084]: 2025-11-23 09:36:06.555 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:36:06 localhost nova_compute[230084]: 2025-11-23 09:36:06.555 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:36:06 localhost nova_compute[230084]: 2025-11-23 09:36:06.563 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:36:06 localhost nova_compute[230084]: 2025-11-23 09:36:06.564 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:06 localhost python3.9[244786]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 23 04:36:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:36:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:36:07 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:36:07 localhost systemd[1]: var-lib-containers-storage-overlay-ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b-merged.mount: Deactivated successfully.
Nov 23 04:36:07 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:36:07 localhost systemd[1]: libpod-conmon-8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.scope: Deactivated successfully.
Nov 23 04:36:07 localhost podman[244798]: 2025-11-23 09:36:07.809655247 +0000 UTC m=+0.714226596 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:36:07 localhost podman[244798]: 2025-11-23 09:36:07.850892396 +0000 UTC m=+0.755463695 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:36:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55554 DF PROTO=TCP SPT=38158 DPT=9105 SEQ=1076383581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75966ED90000000001030307) 
Nov 23 04:36:08 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:36:08 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:36:08 localhost nova_compute[230084]: 2025-11-23 09:36:08.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:08 localhost nova_compute[230084]: 2025-11-23 09:36:08.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:36:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45041 DF PROTO=TCP SPT=54070 DPT=9102 SEQ=2529215924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759672D90000000001030307) 
Nov 23 04:36:08 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:36:08 localhost systemd[1]: var-lib-containers-storage-overlay-0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad-merged.mount: Deactivated successfully.
Nov 23 04:36:09 localhost systemd[1]: var-lib-containers-storage-overlay-0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad-merged.mount: Deactivated successfully.
Nov 23 04:36:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:36:09.235 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:36:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:36:09.235 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:36:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:36:09.235 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:36:09 localhost nova_compute[230084]: 2025-11-23 09:36:09.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:09 localhost nova_compute[230084]: 2025-11-23 09:36:09.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:09 localhost nova_compute[230084]: 2025-11-23 09:36:09.570 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:36:09 localhost nova_compute[230084]: 2025-11-23 09:36:09.570 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:36:09 localhost nova_compute[230084]: 2025-11-23 09:36:09.571 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:36:09 localhost nova_compute[230084]: 2025-11-23 09:36:09.571 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:36:09 localhost nova_compute[230084]: 2025-11-23 09:36:09.571 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.024 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.203 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.204 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12994MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.205 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.205 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.266 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.266 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.282 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.727 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.732 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.752 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.755 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:36:10 localhost nova_compute[230084]: 2025-11-23 09:36:10.755 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-6ac3b29840d2c794dc0c0033b626822dc9158444d4c44499bc50ec992e63998d-merged.mount: Deactivated successfully.
Nov 23 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-6ac3b29840d2c794dc0c0033b626822dc9158444d4c44499bc50ec992e63998d-merged.mount: Deactivated successfully.
Nov 23 04:36:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30825 DF PROTO=TCP SPT=59904 DPT=9882 SEQ=3524692401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75967C3C0000000001030307) 
Nov 23 04:36:11 localhost nova_compute[230084]: 2025-11-23 09:36:11.756 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:36:11 localhost python3.9[245025]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:36:12 localhost systemd[1]: Started libpod-conmon-aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.scope.
Nov 23 04:36:12 localhost podman[245026]: 2025-11-23 09:36:12.125851289 +0000 UTC m=+0.127533991 container exec aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:36:12 localhost podman[245026]: 2025-11-23 09:36:12.156084698 +0000 UTC m=+0.157767440 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 23 04:36:12 localhost systemd[1]: var-lib-containers-storage-overlay-d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92-merged.mount: Deactivated successfully.
Nov 23 04:36:12 localhost podman[244799]: 2025-11-23 09:36:12.512829056 +0000 UTC m=+5.415410493 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 04:36:12 localhost podman[244799]: 2025-11-23 09:36:12.595705836 +0000 UTC m=+5.498287293 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:36:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:36:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42759 DF PROTO=TCP SPT=50604 DPT=9101 SEQ=816460270 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759684990000000001030307) 
Nov 23 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:36:13 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:36:13 localhost systemd[1]: libpod-conmon-aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.scope: Deactivated successfully.
Nov 23 04:36:14 localhost podman[245066]: 2025-11-23 09:36:14.020117399 +0000 UTC m=+0.924543604 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:36:14 localhost podman[245066]: 2025-11-23 09:36:14.031023267 +0000 UTC m=+0.935449492 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:36:14 localhost python3.9[245196]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 23 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 23 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 23 04:36:16 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:36:16 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:36:16 localhost systemd[1]: Started libpod-conmon-aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.scope.
Nov 23 04:36:16 localhost podman[245197]: 2025-11-23 09:36:16.102787466 +0000 UTC m=+1.446507197 container exec aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:36:16 localhost podman[245197]: 2025-11-23 09:36:16.112907984 +0000 UTC m=+1.456627725 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 04:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:36:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 23 04:36:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42760 DF PROTO=TCP SPT=50604 DPT=9101 SEQ=816460270 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759694590000000001030307) 
Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:17 localhost systemd[1]: libpod-conmon-aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.scope: Deactivated successfully.
Nov 23 04:36:18 localhost python3.9[245333]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:18 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 23 04:36:18 localhost systemd[1]: var-lib-containers-storage-overlay-e5c7be86f522835b38ebdead166e5392bec044956b756fc70136b6abae4f549a-merged.mount: Deactivated successfully.
Nov 23 04:36:18 localhost python3.9[245443]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 23 04:36:18 localhost systemd[1]: var-lib-containers-storage-overlay-e5c7be86f522835b38ebdead166e5392bec044956b756fc70136b6abae4f549a-merged.mount: Deactivated successfully.
Nov 23 04:36:18 localhost podman[240144]: time="2025-11-23T09:36:18Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\""
Nov 23 04:36:18 localhost podman[240144]: @ - - [23/Nov/2025:09:31:37 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1"
Nov 23 04:36:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60046 DF PROTO=TCP SPT=34996 DPT=9100 SEQ=3787577550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75969ED70000000001030307) 
Nov 23 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 23 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b-merged.mount: Deactivated successfully.
Nov 23 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-ebc5390a8adc6e86ab1b1a49f0293db52dd1bfcfe84928a36760e9d55b28d63b-merged.mount: Deactivated successfully.
Nov 23 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92-merged.mount: Deactivated successfully.
Nov 23 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068-merged.mount: Deactivated successfully.
Nov 23 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad-merged.mount: Deactivated successfully.
Nov 23 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-0ff11ed3154c8bbd91096301c9cfc5b95bbe726d99c5650ba8d355053fb0bbad-merged.mount: Deactivated successfully.
Nov 23 04:36:22 localhost python3.9[245562]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:22 localhost systemd[1]: Started libpod-conmon-4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.scope.
Nov 23 04:36:22 localhost podman[245563]: 2025-11-23 09:36:22.304780955 +0000 UTC m=+0.106766443 container exec 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm)
Nov 23 04:36:22 localhost podman[245563]: 2025-11-23 09:36:22.338968798 +0000 UTC m=+0.140954306 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6-merged.mount: Deactivated successfully.
Nov 23 04:36:22 localhost systemd[1]: libpod-conmon-4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.scope: Deactivated successfully.
Nov 23 04:36:23 localhost python3.9[245702]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60048 DF PROTO=TCP SPT=34996 DPT=9100 SEQ=3787577550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596AADA0000000001030307) 
Nov 23 04:36:23 localhost systemd[1]: Started libpod-conmon-4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.scope.
Nov 23 04:36:23 localhost podman[245703]: 2025-11-23 09:36:23.301380792 +0000 UTC m=+0.104590666 container exec 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 04:36:23 localhost podman[245703]: 2025-11-23 09:36:23.333796548 +0000 UTC m=+0.137006412 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-d16160b7dcc2f7ec400dce38b825ab93d5279c0ca0a9a7ff351e435b4aeeea92-merged.mount: Deactivated successfully.
Nov 23 04:36:24 localhost python3.9[245841]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 23 04:36:24 localhost systemd[1]: libpod-conmon-4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.scope: Deactivated successfully.
Nov 23 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 23 04:36:24 localhost python3.9[245951]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 23 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 23 04:36:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43274 DF PROTO=TCP SPT=54394 DPT=9105 SEQ=2167386403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596B35A0000000001030307) 
Nov 23 04:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 23 04:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-e5c7be86f522835b38ebdead166e5392bec044956b756fc70136b6abae4f549a-merged.mount: Deactivated successfully.
Nov 23 04:36:25 localhost podman[240144]: @ - - [23/Nov/2025:09:31:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 140421 "" "Go-http-client/1.1"
Nov 23 04:36:25 localhost podman_exporter[240364]: ts=2025-11-23T09:36:25.978Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 23 04:36:25 localhost podman_exporter[240364]: ts=2025-11-23T09:36:25.979Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 23 04:36:25 localhost podman_exporter[240364]: ts=2025-11-23T09:36:25.979Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882
Nov 23 04:36:25 localhost python3.9[246074]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:26 localhost systemd[1]: Started libpod-conmon-bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.scope.
Nov 23 04:36:26 localhost podman[246079]: 2025-11-23 09:36:26.107702814 +0000 UTC m=+0.101684659 container exec bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:36:26 localhost podman[246079]: 2025-11-23 09:36:26.140183322 +0000 UTC m=+0.134165117 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:36:26 localhost systemd[1]: libpod-conmon-bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.scope: Deactivated successfully.
Nov 23 04:36:26 localhost python3.9[246218]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:26 localhost systemd[1]: Started libpod-conmon-bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.scope.
Nov 23 04:36:26 localhost podman[246219]: 2025-11-23 09:36:26.896111138 +0000 UTC m=+0.096979854 container exec bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:36:26 localhost podman[246219]: 2025-11-23 09:36:26.928483304 +0000 UTC m=+0.129352020 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:36:26 localhost systemd[1]: libpod-conmon-bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.scope: Deactivated successfully.
Nov 23 04:36:27 localhost python3.9[246360]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:28 localhost python3.9[246470]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 23 04:36:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:36:29 localhost python3.9[246593]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:29 localhost podman[246594]: 2025-11-23 09:36:29.177773395 +0000 UTC m=+0.079268046 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:36:29 localhost podman[246594]: 2025-11-23 09:36:29.191882758 +0000 UTC m=+0.093377419 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible)
Nov 23 04:36:29 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:36:29 localhost systemd[1]: Started libpod-conmon-7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.scope.
Nov 23 04:36:29 localhost podman[246607]: 2025-11-23 09:36:29.290163476 +0000 UTC m=+0.112459904 container exec 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:36:29 localhost podman[246607]: 2025-11-23 09:36:29.323131856 +0000 UTC m=+0.145428274 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:36:29 localhost systemd[1]: libpod-conmon-7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.scope: Deactivated successfully.
Nov 23 04:36:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43275 DF PROTO=TCP SPT=54394 DPT=9105 SEQ=2167386403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596C3190000000001030307) 
Nov 23 04:36:30 localhost python3.9[246752]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:30 localhost systemd[1]: Started libpod-conmon-7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.scope.
Nov 23 04:36:30 localhost podman[246753]: 2025-11-23 09:36:30.160209977 +0000 UTC m=+0.081830163 container exec 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:36:30 localhost podman[246753]: 2025-11-23 09:36:30.191984347 +0000 UTC m=+0.113604603 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:36:30 localhost systemd[1]: libpod-conmon-7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.scope: Deactivated successfully.
Nov 23 04:36:30 localhost python3.9[246893]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:36:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:36:31 localhost systemd[1]: tmp-crun.aakACN.mount: Deactivated successfully.
Nov 23 04:36:31 localhost podman[246950]: 2025-11-23 09:36:31.171494372 +0000 UTC m=+0.074157470 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:36:31 localhost podman[246949]: 2025-11-23 09:36:31.181026925 +0000 UTC m=+0.082176044 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 23 04:36:31 localhost podman[246950]: 2025-11-23 09:36:31.184039404 +0000 UTC m=+0.086702572 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:36:31 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:36:31 localhost podman[246949]: 2025-11-23 09:36:31.222955873 +0000 UTC m=+0.124104992 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, release=1755695350, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Nov 23 04:36:31 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:36:31 localhost python3.9[247047]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 23 04:36:32 localhost python3.9[247169]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:32 localhost systemd[1]: Started libpod-conmon-02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.scope.
Nov 23 04:36:32 localhost podman[247170]: 2025-11-23 09:36:32.336476949 +0000 UTC m=+0.103425174 container exec 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, version=9.6)
Nov 23 04:36:32 localhost podman[247170]: 2025-11-23 09:36:32.343973707 +0000 UTC m=+0.110921962 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 04:36:32 localhost systemd[1]: libpod-conmon-02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.scope: Deactivated successfully.
Nov 23 04:36:33 localhost python3.9[247309]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 23 04:36:33 localhost systemd[1]: Started libpod-conmon-02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.scope.
Nov 23 04:36:33 localhost podman[247310]: 2025-11-23 09:36:33.159119229 +0000 UTC m=+0.102624413 container exec 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64)
Nov 23 04:36:33 localhost podman[247310]: 2025-11-23 09:36:33.193016255 +0000 UTC m=+0.136521439 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6)
Nov 23 04:36:33 localhost systemd[1]: libpod-conmon-02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.scope: Deactivated successfully.
Nov 23 04:36:33 localhost python3.9[247448]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:33 localhost sshd[247466]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:36:34 localhost python3.9[247560]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:35 localhost python3.9[247670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60050 DF PROTO=TCP SPT=34996 DPT=9100 SEQ=3787577550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596DAD90000000001030307) 
Nov 23 04:36:35 localhost python3.9[247758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890594.976165-3070-223548269246655/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:36 localhost python3.9[247868]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:37 localhost python3.9[247978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43276 DF PROTO=TCP SPT=54394 DPT=9105 SEQ=2167386403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596E2D90000000001030307) 
Nov 23 04:36:37 localhost python3.9[248035]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:36:38 localhost podman[248072]: 2025-11-23 09:36:38.181287588 +0000 UTC m=+0.084877964 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:36:38 localhost podman[248072]: 2025-11-23 09:36:38.217865774 +0000 UTC m=+0.121456160 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 04:36:38 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:36:38 localhost python3.9[248163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5995 DF PROTO=TCP SPT=33432 DPT=9102 SEQ=502731620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596E6D90000000001030307) 
Nov 23 04:36:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:36:38 localhost systemd[1]: tmp-crun.04Q2nj.mount: Deactivated successfully.
Nov 23 04:36:38 localhost podman[248221]: 2025-11-23 09:36:38.885208201 +0000 UTC m=+0.088590993 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:36:38 localhost podman[248221]: 2025-11-23 09:36:38.936106106 +0000 UTC m=+0.139488938 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:36:38 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:36:38 localhost python3.9[248220]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.fvey8xx4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:39 localhost auditd[727]: Audit daemon rotating log files
Nov 23 04:36:39 localhost python3.9[248353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:40 localhost python3.9[248410]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:40 localhost python3.9[248520]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:36:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61420 DF PROTO=TCP SPT=58076 DPT=9882 SEQ=2762485943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596F16D0000000001030307) 
Nov 23 04:36:42 localhost sshd[248555]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:36:42 localhost python3[248633]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 23 04:36:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4517 DF PROTO=TCP SPT=41870 DPT=9101 SEQ=75295506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7596F9DA0000000001030307) 
Nov 23 04:36:43 localhost python3.9[248743]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:43 localhost python3.9[248800]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:36:44 localhost podman[248802]: 2025-11-23 09:36:44.113003624 +0000 UTC m=+0.075039604 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 04:36:44 localhost podman[248802]: 2025-11-23 09:36:44.147850154 +0000 UTC m=+0.109886124 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:36:44 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:36:45 localhost python3.9[248937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:36:46 localhost python3.9[248994]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:46 localhost podman[248995]: 2025-11-23 09:36:46.167701922 +0000 UTC m=+0.073591206 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Nov 23 04:36:46 localhost podman[248995]: 2025-11-23 09:36:46.183849039 +0000 UTC m=+0.089738313 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 04:36:46 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:36:46 localhost python3.9[249124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:47 localhost python3.9[249181]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4518 DF PROTO=TCP SPT=41870 DPT=9101 SEQ=75295506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759709990000000001030307) 
Nov 23 04:36:48 localhost python3.9[249291]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:48 localhost python3.9[249348]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:49 localhost python3.9[249458]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:36:50 localhost python3.9[249548]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763890608.8491733-3444-147953096691809/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47227 DF PROTO=TCP SPT=39366 DPT=9100 SEQ=4096548919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759714070000000001030307) 
Nov 23 04:36:50 localhost python3.9[249658]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:51 localhost python3.9[249768]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:36:52 localhost python3.9[249881]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47229 DF PROTO=TCP SPT=39366 DPT=9100 SEQ=4096548919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7597201A0000000001030307) 
Nov 23 04:36:53 localhost python3.9[249991]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:36:54 localhost python3.9[250102]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:36:54 localhost python3.9[250214]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:36:55 localhost openstack_network_exporter[242118]: ERROR   09:36:55 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:36:55 localhost openstack_network_exporter[242118]: ERROR   09:36:55 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:36:55 localhost openstack_network_exporter[242118]: 
Nov 23 04:36:55 localhost openstack_network_exporter[242118]: ERROR   09:36:55 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:36:55 localhost openstack_network_exporter[242118]: ERROR   09:36:55 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:36:55 localhost openstack_network_exporter[242118]: ERROR   09:36:55 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:36:55 localhost openstack_network_exporter[242118]: 
Nov 23 04:36:55 localhost python3.9[250331]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:36:56 localhost systemd-logind[761]: Session 56 logged out. Waiting for processes to exit.
Nov 23 04:36:56 localhost systemd[1]: session-56.scope: Deactivated successfully.
Nov 23 04:36:56 localhost systemd[1]: session-56.scope: Consumed 29.919s CPU time.
Nov 23 04:36:56 localhost systemd-logind[761]: Removed session 56.
Nov 23 04:36:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31795 DF PROTO=TCP SPT=49398 DPT=9102 SEQ=2011471280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75972C590000000001030307) 
Nov 23 04:36:57 localhost sshd[250349]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:37:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:37:00 localhost podman[250351]: 2025-11-23 09:37:00.184288423 +0000 UTC m=+0.089000493 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:37:00 localhost podman[250351]: 2025-11-23 09:37:00.200888762 +0000 UTC m=+0.105600812 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:37:00 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:37:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31796 DF PROTO=TCP SPT=49398 DPT=9102 SEQ=2011471280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75973C1A0000000001030307) 
Nov 23 04:37:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:37:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:37:02 localhost systemd[1]: tmp-crun.jwQHCE.mount: Deactivated successfully.
Nov 23 04:37:02 localhost podman[250372]: 2025-11-23 09:37:02.190213274 +0000 UTC m=+0.093174753 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:37:02 localhost podman[250372]: 2025-11-23 09:37:02.19611706 +0000 UTC m=+0.099078529 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:37:02 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:37:02 localhost podman[250371]: 2025-11-23 09:37:02.27367679 +0000 UTC m=+0.180854041 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, version=9.6, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 23 04:37:02 localhost podman[250371]: 2025-11-23 09:37:02.286677203 +0000 UTC m=+0.193854444 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 23 04:37:02 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:37:02 localhost sshd[250415]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:37:02 localhost systemd-logind[761]: New session 57 of user zuul.
Nov 23 04:37:02 localhost systemd[1]: Started Session 57 of User zuul.
Nov 23 04:37:03 localhost python3.9[250528]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:04 localhost python3.9[250638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:04 localhost nova_compute[230084]: 2025-11-23 09:37:04.542 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:04 localhost nova_compute[230084]: 2025-11-23 09:37:04.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:04 localhost python3.9[250748]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:05 localhost python3.9[250856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:06 localhost python3.9[250942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890624.8807175-107-254942496781279/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:06 localhost nova_compute[230084]: 2025-11-23 09:37:06.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:06 localhost nova_compute[230084]: 2025-11-23 09:37:06.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:06 localhost python3.9[251050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:07 localhost python3.9[251136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890626.3394856-151-194843193682656/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:07 localhost python3.9[251244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:08 localhost python3.9[251330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890627.4052548-151-250865598903501/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:08 localhost nova_compute[230084]: 2025-11-23 09:37:08.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:08 localhost nova_compute[230084]: 2025-11-23 09:37:08.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:37:08 localhost nova_compute[230084]: 2025-11-23 09:37:08.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:37:08 localhost nova_compute[230084]: 2025-11-23 09:37:08.558 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:37:08 localhost nova_compute[230084]: 2025-11-23 09:37:08.559 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:08 localhost nova_compute[230084]: 2025-11-23 09:37:08.559 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:37:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:37:08 localhost podman[251457]: 2025-11-23 09:37:08.769001569 +0000 UTC m=+0.102104039 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:37:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31797 DF PROTO=TCP SPT=49398 DPT=9102 SEQ=2011471280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75975CD90000000001030307) 
Nov 23 04:37:08 localhost podman[251457]: 2025-11-23 09:37:08.803394697 +0000 UTC m=+0.136497167 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:37:08 localhost python3.9[251442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:08 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:37:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:37:09 localhost podman[251574]: 2025-11-23 09:37:09.187665413 +0000 UTC m=+0.063971981 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:37:09 localhost podman[251574]: 2025-11-23 09:37:09.197859892 +0000 UTC m=+0.074166460 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:37:09 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:37:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:37:09.236 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:37:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:37:09.237 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:37:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:37:09.237 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:37:09 localhost python3.9[251607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890628.4085765-151-82971030671697/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=65bf67241e64afb7723fe0e4191d8837b56f04b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:09 localhost nova_compute[230084]: 2025-11-23 09:37:09.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:09 localhost nova_compute[230084]: 2025-11-23 09:37:09.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:09 localhost nova_compute[230084]: 2025-11-23 09:37:09.572 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:37:09 localhost nova_compute[230084]: 2025-11-23 09:37:09.573 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:37:09 localhost nova_compute[230084]: 2025-11-23 09:37:09.573 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:37:09 localhost nova_compute[230084]: 2025-11-23 09:37:09.574 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:37:09 localhost nova_compute[230084]: 2025-11-23 09:37:09.574 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.023 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:37:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.232 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.233 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=13085MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.234 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.234 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.306 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.306 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.326 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:37:10 localhost python3.9[251800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.782 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.788 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.801 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.803 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:37:10 localhost nova_compute[230084]: 2025-11-23 09:37:10.804 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:37:11 localhost python3.9[251888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890630.3411994-327-204412094304017/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=1067e04911e84d9dc262158a63dd8e464b0e5dfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:11 localhost podman[240144]: time="2025-11-23T09:37:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:37:11 localhost podman[240144]: @ - - [23/Nov/2025:09:37:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142076 "" "Go-http-client/1.1"
Nov 23 04:37:11 localhost podman[240144]: @ - - [23/Nov/2025:09:37:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15858 "" "Go-http-client/1.1"
Nov 23 04:37:11 localhost nova_compute[230084]: 2025-11-23 09:37:11.804 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:37:11 localhost python3.9[251997]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:37:12 localhost python3.9[252109]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:13 localhost python3.9[252219]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:13 localhost python3.9[252276]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:14 localhost python3.9[252386]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:37:14 localhost podman[252444]: 2025-11-23 09:37:14.618731868 +0000 UTC m=+0.079351288 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 23 04:37:14 localhost podman[252444]: 2025-11-23 09:37:14.654180865 +0000 UTC m=+0.114800275 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:37:14 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:37:14 localhost python3.9[252443]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:15 localhost python3.9[252576]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:37:16 localhost systemd[1]: tmp-crun.nBNwE4.mount: Deactivated successfully.
Nov 23 04:37:16 localhost podman[252687]: 2025-11-23 09:37:16.67771491 +0000 UTC m=+0.091195930 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 23 04:37:16 localhost podman[252687]: 2025-11-23 09:37:16.692891252 +0000 UTC m=+0.106372272 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:37:16 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:37:16 localhost python3.9[252686]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:17 localhost python3.9[252762]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:17 localhost python3.9[252872]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:18 localhost sshd[252930]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:37:18 localhost python3.9[252929]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:20 localhost python3.9[253041]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:37:20 localhost systemd[1]: Reloading.
Nov 23 04:37:20 localhost systemd-rc-local-generator[253063]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:37:20 localhost systemd-sysv-generator[253072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:21 localhost sshd[253168]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:37:21 localhost python3.9[253190]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:22 localhost openstack_network_exporter[242118]: ERROR   09:37:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:37:22 localhost openstack_network_exporter[242118]: ERROR   09:37:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:37:22 localhost openstack_network_exporter[242118]: ERROR   09:37:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:37:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:37:22 localhost openstack_network_exporter[242118]: ERROR   09:37:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:37:22 localhost openstack_network_exporter[242118]: ERROR   09:37:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:37:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:37:22 localhost python3.9[253248]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:23 localhost python3.9[253358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41642 DF PROTO=TCP SPT=41064 DPT=9102 SEQ=577627338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7597956F0000000001030307) 
Nov 23 04:37:23 localhost python3.9[253415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41643 DF PROTO=TCP SPT=41064 DPT=9102 SEQ=577627338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759799590000000001030307) 
Nov 23 04:37:24 localhost python3.9[253525]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:37:24 localhost systemd[1]: Reloading.
Nov 23 04:37:24 localhost systemd-rc-local-generator[253548]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:37:24 localhost systemd-sysv-generator[253553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:24 localhost systemd[1]: Starting Create netns directory...
Nov 23 04:37:24 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 04:37:24 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 04:37:24 localhost systemd[1]: Finished Create netns directory.
Nov 23 04:37:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31798 DF PROTO=TCP SPT=49398 DPT=9102 SEQ=2011471280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75979CD90000000001030307) 
Nov 23 04:37:25 localhost python3.9[253676]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:37:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41644 DF PROTO=TCP SPT=41064 DPT=9102 SEQ=577627338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7597A1590000000001030307) 
Nov 23 04:37:26 localhost python3.9[253786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:37:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5997 DF PROTO=TCP SPT=33432 DPT=9102 SEQ=502731620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7597A4DA0000000001030307) 
Nov 23 04:37:27 localhost python3.9[253874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890646.0649145-736-141621285015179/.source.json _original_basename=.vds2qluj follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:28 localhost python3.9[253984]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:30 localhost python3.9[254292]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False
Nov 23 04:37:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41645 DF PROTO=TCP SPT=41064 DPT=9102 SEQ=577627338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7597B1190000000001030307) 
Nov 23 04:37:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:37:31 localhost podman[254403]: 2025-11-23 09:37:31.17590737 +0000 UTC m=+0.092712891 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 04:37:31 localhost podman[254403]: 2025-11-23 09:37:31.189818637 +0000 UTC m=+0.106624218 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:37:31 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:37:31 localhost python3.9[254402]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:37:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:37:32 localhost systemd[1]: tmp-crun.cRGpKz.mount: Deactivated successfully.
Nov 23 04:37:32 localhost podman[254532]: 2025-11-23 09:37:32.932016738 +0000 UTC m=+0.101358700 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal 
Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 23 04:37:32 localhost podman[254532]: 2025-11-23 09:37:32.96689953 +0000 UTC m=+0.136241472 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 23 04:37:32 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:37:32 localhost podman[254533]: 2025-11-23 09:37:32.978468355 +0000 UTC m=+0.145236959 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:37:32 localhost podman[254533]: 2025-11-23 09:37:32.986810276 +0000 UTC m=+0.153578910 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:37:32 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:37:33 localhost python3.9[254531]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 04:37:37 localhost python3[254713]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:37:37 localhost podman[254748]: 
Nov 23 04:37:37 localhost podman[254748]: 2025-11-23 09:37:37.695799359 +0000 UTC m=+0.089666170 container create f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_id=neutron_sriov_agent, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=neutron_sriov_agent)
Nov 23 04:37:37 localhost podman[254748]: 2025-11-23 09:37:37.652610788 +0000 UTC m=+0.046477629 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Nov 23 04:37:37 localhost python3[254713]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186 --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Nov 23 04:37:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41646 DF PROTO=TCP SPT=41064 DPT=9102 SEQ=577627338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7597D0D90000000001030307) 
Nov 23 04:37:38 localhost python3.9[254894]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:37:39 localhost systemd[1]: tmp-crun.4F0eiv.mount: Deactivated successfully.
Nov 23 04:37:39 localhost podman[255004]: 2025-11-23 09:37:39.200240687 +0000 UTC m=+0.101573925 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 04:37:39 localhost podman[255004]: 2025-11-23 09:37:39.234975556 +0000 UTC m=+0.136308804 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:37:39 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:37:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:37:39 localhost python3.9[255017]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:39 localhost systemd[1]: tmp-crun.PdvHk6.mount: Deactivated successfully.
Nov 23 04:37:39 localhost podman[255026]: 2025-11-23 09:37:39.353932859 +0000 UTC m=+0.079293916 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:37:39 localhost podman[255026]: 2025-11-23 09:37:39.39106812 +0000 UTC m=+0.116429137 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:37:39 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:37:39 localhost python3.9[255102]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:37:40 localhost python3.9[255211]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890659.8469315-1000-201438145874294/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:37:41 localhost python3.9[255266]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:37:41 localhost systemd[1]: Reloading.
Nov 23 04:37:41 localhost systemd-rc-local-generator[255289]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:37:41 localhost systemd-sysv-generator[255294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:41 localhost podman[240144]: time="2025-11-23T09:37:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:37:41 localhost podman[240144]: @ - - [23/Nov/2025:09:37:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144032 "" "Go-http-client/1.1"
Nov 23 04:37:41 localhost podman[240144]: @ - - [23/Nov/2025:09:37:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16185 "" "Go-http-client/1.1"
Nov 23 04:37:42 localhost python3.9[255357]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:37:43 localhost systemd[1]: Reloading.
Nov 23 04:37:43 localhost systemd-rc-local-generator[255384]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:37:43 localhost systemd-sysv-generator[255387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:37:43 localhost systemd[1]: Starting neutron_sriov_agent container...
Nov 23 04:37:43 localhost systemd[1]: Started libcrun container.
Nov 23 04:37:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f50f9bf7e95693f6c87863dbf41af734f6cdf0bf57acdc68e9c6c5e0b2f410c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 23 04:37:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f50f9bf7e95693f6c87863dbf41af734f6cdf0bf57acdc68e9c6c5e0b2f410c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:37:43 localhost podman[255398]: 2025-11-23 09:37:43.566981878 +0000 UTC m=+0.110770106 container init f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=neutron_sriov_agent)
Nov 23 04:37:43 localhost podman[255398]: 2025-11-23 09:37:43.576693465 +0000 UTC m=+0.120481673 container start f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:37:43 localhost podman[255398]: neutron_sriov_agent
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + sudo -E kolla_set_configs
Nov 23 04:37:43 localhost systemd[1]: Started neutron_sriov_agent container.
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Validating config file
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Copying service configuration files
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Writing out command to execute
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: ++ cat /run_command
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + CMD=/usr/bin/neutron-sriov-nic-agent
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + ARGS=
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + sudo kolla_copy_cacerts
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + [[ ! -n '' ]]
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + . kolla_extend_start
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + umask 0022
Nov 23 04:37:43 localhost neutron_sriov_agent[255412]: + exec /usr/bin/neutron-sriov-nic-agent
Nov 23 04:37:44 localhost python3.9[255536]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:37:44 localhost systemd[1]: Stopping neutron_sriov_agent container...
Nov 23 04:37:44 localhost systemd[1]: tmp-crun.Iopw9m.mount: Deactivated successfully.
Nov 23 04:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:37:44 localhost systemd[1]: libpod-f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab.scope: Deactivated successfully.
Nov 23 04:37:44 localhost systemd[1]: libpod-f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab.scope: Consumed 1.088s CPU time.
Nov 23 04:37:44 localhost podman[255540]: 2025-11-23 09:37:44.691770713 +0000 UTC m=+0.085132187 container died f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:37:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab-userdata-shm.mount: Deactivated successfully.
Nov 23 04:37:44 localhost podman[255540]: 2025-11-23 09:37:44.732666507 +0000 UTC m=+0.126027931 container cleanup f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:37:44 localhost podman[255540]: neutron_sriov_agent
Nov 23 04:37:44 localhost podman[255573]: 2025-11-23 09:37:44.801046388 +0000 UTC m=+0.042601619 container cleanup f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:37:44 localhost podman[255573]: neutron_sriov_agent
Nov 23 04:37:44 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully.
Nov 23 04:37:44 localhost systemd[1]: Stopped neutron_sriov_agent container.
Nov 23 04:37:44 localhost systemd[1]: Starting neutron_sriov_agent container...
Nov 23 04:37:44 localhost podman[255552]: 2025-11-23 09:37:44.866821812 +0000 UTC m=+0.169457682 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:37:44 localhost systemd[1]: Started libcrun container.
Nov 23 04:37:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f50f9bf7e95693f6c87863dbf41af734f6cdf0bf57acdc68e9c6c5e0b2f410c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 23 04:37:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f50f9bf7e95693f6c87863dbf41af734f6cdf0bf57acdc68e9c6c5e0b2f410c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:37:44 localhost podman[255587]: 2025-11-23 09:37:44.927848119 +0000 UTC m=+0.099214840 container init f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 04:37:44 localhost podman[255587]: 2025-11-23 09:37:44.936421896 +0000 UTC m=+0.107788617 container start f4bb5c96c482f37e9b55b911025cf9fbe15fe49e9282cc15b8c6cd9569266fab (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'fc94218381a94dd3dfb97d00693d076063f570269228ec68cd3d10fc76129186'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_managed=true)
Nov 23 04:37:44 localhost podman[255587]: neutron_sriov_agent
Nov 23 04:37:44 localhost neutron_sriov_agent[255613]: + sudo -E kolla_set_configs
Nov 23 04:37:44 localhost systemd[1]: Started neutron_sriov_agent container.
Nov 23 04:37:44 localhost podman[255552]: 2025-11-23 09:37:44.947244983 +0000 UTC m=+0.249880903 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:37:44 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Validating config file
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Copying service configuration files
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Writing out command to execute
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/c8b37597a29e61f341a0e3f5416437aac1a5cd21cb3a407dd674c7a7a1ff41da
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: ++ cat /run_command
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: + CMD=/usr/bin/neutron-sriov-nic-agent
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: + ARGS=
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: + sudo kolla_copy_cacerts
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: + [[ ! -n '' ]]
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: + . kolla_extend_start
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: + umask 0022
Nov 23 04:37:45 localhost neutron_sriov_agent[255613]: + exec /usr/bin/neutron-sriov-nic-agent
Nov 23 04:37:45 localhost systemd[1]: session-57.scope: Deactivated successfully.
Nov 23 04:37:45 localhost systemd[1]: session-57.scope: Consumed 23.389s CPU time.
Nov 23 04:37:45 localhost systemd-logind[761]: Session 57 logged out. Waiting for processes to exit.
Nov 23 04:37:45 localhost systemd-logind[761]: Removed session 57.
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.612 2 INFO neutron.common.config [-] Logging enabled!
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.612 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.612 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.612 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.613 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.613 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.613 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005532586.localdomain'}
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.613 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-d94cd0ac-5055-4c55-baec-8b9179aacb66 - - - - - -] RPC agent_id: nic-switch-agent.np0005532586.localdomain
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.618 2 INFO neutron.agent.agent_extensions_manager [None req-d94cd0ac-5055-4c55-baec-8b9179aacb66 - - - - - -] Loaded agent extensions: ['qos']
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.618 2 INFO neutron.agent.agent_extensions_manager [None req-d94cd0ac-5055-4c55-baec-8b9179aacb66 - - - - - -] Initializing agent extension 'qos'
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.966 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-d94cd0ac-5055-4c55-baec-8b9179aacb66 - - - - - -] Agent initialized successfully, now running...
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.967 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-d94cd0ac-5055-4c55-baec-8b9179aacb66 - - - - - -] SRIOV NIC Agent RPC Daemon Started!
Nov 23 04:37:46 localhost neutron_sriov_agent[255613]: 2025-11-23 09:37:46.967 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-d94cd0ac-5055-4c55-baec-8b9179aacb66 - - - - - -] Agent out of sync with plugin!
Nov 23 04:37:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:37:47 localhost podman[255649]: 2025-11-23 09:37:47.173786492 +0000 UTC m=+0.076085087 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 23 04:37:47 localhost podman[255649]: 2025-11-23 09:37:47.185876313 +0000 UTC m=+0.088174928 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 04:37:47 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:37:51 localhost sshd[255669]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:37:51 localhost systemd-logind[761]: New session 58 of user zuul.
Nov 23 04:37:51 localhost systemd[1]: Started Session 58 of User zuul.
Nov 23 04:37:52 localhost openstack_network_exporter[242118]: ERROR   09:37:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:37:52 localhost openstack_network_exporter[242118]: ERROR   09:37:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:37:52 localhost openstack_network_exporter[242118]: ERROR   09:37:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:37:52 localhost openstack_network_exporter[242118]: ERROR   09:37:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:37:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:37:52 localhost openstack_network_exporter[242118]: ERROR   09:37:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:37:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:37:52 localhost python3.9[255780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:37:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20304 DF PROTO=TCP SPT=40558 DPT=9102 SEQ=3474784123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75980AA00000000001030307) 
Nov 23 04:37:53 localhost python3.9[255894]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:37:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20305 DF PROTO=TCP SPT=40558 DPT=9102 SEQ=3474784123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75980E990000000001030307) 
Nov 23 04:37:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41647 DF PROTO=TCP SPT=41064 DPT=9102 SEQ=577627338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759810DA0000000001030307) 
Nov 23 04:37:55 localhost python3.9[255957]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:37:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20306 DF PROTO=TCP SPT=40558 DPT=9102 SEQ=3474784123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7598169A0000000001030307) 
Nov 23 04:37:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31799 DF PROTO=TCP SPT=49398 DPT=9102 SEQ=2011471280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75981AD90000000001030307) 
Nov 23 04:37:59 localhost python3.9[256069]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 23 04:38:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20307 DF PROTO=TCP SPT=40558 DPT=9102 SEQ=3474784123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759826590000000001030307) 
Nov 23 04:38:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:38:01 localhost systemd[1]: tmp-crun.SHaqHl.mount: Deactivated successfully.
Nov 23 04:38:01 localhost podman[256183]: 2025-11-23 09:38:01.508523047 +0000 UTC m=+0.102288811 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:38:01 localhost podman[256183]: 2025-11-23 09:38:01.523070903 +0000 UTC m=+0.116836667 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:38:01 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:38:01 localhost python3.9[256182]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:02 localhost python3.9[256310]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:02 localhost python3.9[256420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:38:03 localhost podman[256454]: 2025-11-23 09:38:03.186153771 +0000 UTC m=+0.092043010 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Nov 23 04:38:03 localhost podman[256454]: 2025-11-23 09:38:03.20683185 +0000 UTC m=+0.112721069 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, version=9.6, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., 
io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Nov 23 04:38:03 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:38:03 localhost podman[256455]: 2025-11-23 09:38:03.295431627 +0000 UTC m=+0.192502781 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:38:03 localhost podman[256455]: 2025-11-23 09:38:03.331888063 +0000 UTC m=+0.228959227 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:38:03 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:38:03 localhost python3.9[256571]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:04 localhost python3.9[256681]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:04 localhost nova_compute[230084]: 2025-11-23 09:38:04.542 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:04 localhost python3.9[256791]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:05 localhost python3.9[256901]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:06 localhost python3.9[257011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:06 localhost nova_compute[230084]: 2025-11-23 09:38:06.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:06 localhost nova_compute[230084]: 2025-11-23 09:38:06.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:07 localhost python3.9[257099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890685.7893944-280-203499228673599/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:07 localhost nova_compute[230084]: 2025-11-23 09:38:07.543 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:07 localhost python3.9[257207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:08 localhost python3.9[257293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890687.3292632-326-106490778567297/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:08 localhost nova_compute[230084]: 2025-11-23 09:38:08.545 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20308 DF PROTO=TCP SPT=40558 DPT=9102 SEQ=3474784123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759846D90000000001030307) 
Nov 23 04:38:08 localhost python3.9[257401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:09.237 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:38:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:09.238 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:38:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:09.238 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:38:09 localhost python3.9[257487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890688.451534-326-229882040076593/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:09 localhost nova_compute[230084]: 2025-11-23 09:38:09.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:09 localhost nova_compute[230084]: 2025-11-23 09:38:09.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:38:09 localhost nova_compute[230084]: 2025-11-23 09:38:09.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:38:09 localhost nova_compute[230084]: 2025-11-23 09:38:09.565 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:38:09 localhost nova_compute[230084]: 2025-11-23 09:38:09.565 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:09 localhost nova_compute[230084]: 2025-11-23 09:38:09.566 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:09 localhost nova_compute[230084]: 2025-11-23 09:38:09.566 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:38:10 localhost python3.9[257595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:38:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:38:10 localhost podman[257616]: 2025-11-23 09:38:10.167512625 +0000 UTC m=+0.075113370 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:38:10 localhost podman[257616]: 2025-11-23 09:38:10.17788788 +0000 UTC m=+0.085488665 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:38:10 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:38:10 localhost podman[257613]: 2025-11-23 09:38:10.236708129 +0000 UTC m=+0.143558395 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 04:38:10 localhost podman[257613]: 2025-11-23 09:38:10.242510603 +0000 UTC m=+0.149360829 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 23 04:38:10 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:38:10 localhost python3.9[257758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890689.5417964-326-181499260352200/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=08d63fce15a0acc8ec9ad67311b58d49a5e13b46 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:10 localhost nova_compute[230084]: 2025-11-23 09:38:10.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:11 localhost podman[257833]: 2025-11-23 09:38:11.124983977 +0000 UTC m=+0.091948488 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-09-24T08:57:55, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, ceph=True, release=553, io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7)
Nov 23 04:38:11 localhost podman[257833]: 2025-11-23 09:38:11.225706266 +0000 UTC m=+0.192670757 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7)
Nov 23 04:38:11 localhost podman[240144]: time="2025-11-23T09:38:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:38:11 localhost podman[240144]: @ - - [23/Nov/2025:09:38:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144033 "" "Go-http-client/1.1"
Nov 23 04:38:11 localhost podman[240144]: @ - - [23/Nov/2025:09:38:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16305 "" "Go-http-client/1.1"
Nov 23 04:38:11 localhost nova_compute[230084]: 2025-11-23 09:38:11.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:38:11 localhost nova_compute[230084]: 2025-11-23 09:38:11.563 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:38:11 localhost nova_compute[230084]: 2025-11-23 09:38:11.564 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:38:11 localhost nova_compute[230084]: 2025-11-23 09:38:11.564 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:38:11 localhost nova_compute[230084]: 2025-11-23 09:38:11.565 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:38:11 localhost nova_compute[230084]: 2025-11-23 09:38:11.565 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.021 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.244 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.245 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12934MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.246 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.246 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.314 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.314 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.334 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:38:12 localhost python3.9[258112]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.772 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.778 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.801 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.803 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:38:12 localhost nova_compute[230084]: 2025-11-23 09:38:12.804 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:38:13 localhost python3.9[258218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890692.3405423-499-96326936756387/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=1067e04911e84d9dc262158a63dd8e464b0e5dfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:14 localhost python3.9[258326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:14 localhost python3.9[258412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890693.5313027-545-278148101014579/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:38:15 localhost systemd[1]: tmp-crun.NibQpP.mount: Deactivated successfully.
Nov 23 04:38:15 localhost podman[258435]: 2025-11-23 09:38:15.202619198 +0000 UTC m=+0.104332486 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Nov 23 04:38:15 localhost podman[258435]: 2025-11-23 09:38:15.245915345 +0000 UTC m=+0.147628643 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 04:38:15 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:38:15 localhost python3.9[258546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:16 localhost python3.9[258632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890695.1162603-545-244825842283566/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:17 localhost python3.9[258740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:38:17 localhost podman[258741]: 2025-11-23 09:38:17.348291944 +0000 UTC m=+0.091246579 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:38:17 localhost podman[258741]: 2025-11-23 09:38:17.362971533 +0000 UTC m=+0.105926158 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 23 04:38:17 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:38:17 localhost python3.9[258813]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:18 localhost python3.9[258921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:18 localhost sshd[258933]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:38:18 localhost python3.9[259008]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890697.8419883-631-64506182863976/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:19 localhost python3.9[259117]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:38:20 localhost python3.9[259229]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:20 localhost python3.9[259339]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:21 localhost python3.9[259396]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:22 localhost python3.9[259506]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:22 localhost openstack_network_exporter[242118]: ERROR   09:38:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:38:22 localhost openstack_network_exporter[242118]: ERROR   09:38:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:38:22 localhost openstack_network_exporter[242118]: ERROR   09:38:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:38:22 localhost openstack_network_exporter[242118]: ERROR   09:38:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:38:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:38:22 localhost openstack_network_exporter[242118]: ERROR   09:38:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:38:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:38:22 localhost python3.9[259563]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:23 localhost python3.9[259673]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53340 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1757326687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75987FD00000000001030307) 
Nov 23 04:38:23 localhost sshd[259783]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:38:23 localhost python3.9[259784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53341 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1757326687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759883DA0000000001030307) 
Nov 23 04:38:24 localhost python3.9[259842]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20309 DF PROTO=TCP SPT=40558 DPT=9102 SEQ=3474784123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759886D90000000001030307) 
Nov 23 04:38:25 localhost python3.9[259952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53342 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1757326687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75988BDA0000000001030307) 
Nov 23 04:38:26 localhost python3.9[260009]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41648 DF PROTO=TCP SPT=41064 DPT=9102 SEQ=577627338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75988EDA0000000001030307) 
Nov 23 04:38:27 localhost python3.9[260119]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:38:27 localhost systemd[1]: Reloading.
Nov 23 04:38:27 localhost systemd-sysv-generator[260146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:38:27 localhost systemd-rc-local-generator[260142]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:28 localhost python3.9[260267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:28 localhost python3.9[260324]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:29 localhost python3.9[260434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:29 localhost python3.9[260491]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53343 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1757326687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75989B9A0000000001030307) 
Nov 23 04:38:30 localhost python3.9[260601]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:38:30 localhost systemd[1]: Reloading.
Nov 23 04:38:30 localhost systemd-sysv-generator[260630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:38:30 localhost systemd-rc-local-generator[260626]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:30 localhost systemd[1]: Starting Create netns directory...
Nov 23 04:38:30 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 04:38:30 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 04:38:30 localhost systemd[1]: Finished Create netns directory.
Nov 23 04:38:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:38:31 localhost podman[260752]: 2025-11-23 09:38:31.841778366 +0000 UTC m=+0.091357341 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Nov 23 04:38:31 localhost podman[260752]: 2025-11-23 09:38:31.851566996 +0000 UTC m=+0.101145991 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 23 04:38:31 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:38:31 localhost python3.9[260753]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:38:32 localhost python3.9[260882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:38:33 localhost python3.9[260970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763890712.234626-1075-165186162497735/.source.json _original_basename=.amd6dsfm follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:38:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:38:33 localhost podman[261081]: 2025-11-23 09:38:33.906516378 +0000 UTC m=+0.079833816 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Nov 23 04:38:33 localhost podman[261081]: 2025-11-23 09:38:33.918938318 +0000 UTC m=+0.092255666 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm)
Nov 23 04:38:33 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:38:33 localhost podman[261082]: 2025-11-23 09:38:33.966399165 +0000 UTC m=+0.136052046 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:38:33 localhost podman[261082]: 2025-11-23 09:38:33.974474869 +0000 UTC m=+0.144127780 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:38:33 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:38:34 localhost python3.9[261080]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:36 localhost python3.9[261429]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False
Nov 23 04:38:37 localhost python3.9[261539]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:38:38 localhost python3.9[261649]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 04:38:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53344 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1757326687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7598BCDA0000000001030307) 
Nov 23 04:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:38:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:38:41 localhost systemd[1]: tmp-crun.LTnOQm.mount: Deactivated successfully.
Nov 23 04:38:41 localhost podman[261695]: 2025-11-23 09:38:41.198124912 +0000 UTC m=+0.101253594 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:38:41 localhost podman[261694]: 2025-11-23 09:38:41.160950857 +0000 UTC m=+0.068651740 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 04:38:41 localhost podman[261695]: 2025-11-23 09:38:41.230856129 +0000 UTC m=+0.133984781 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:38:41 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:38:41 localhost podman[261694]: 2025-11-23 09:38:41.245345783 +0000 UTC m=+0.153046656 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:38:41 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:38:41 localhost podman[240144]: time="2025-11-23T09:38:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:38:41 localhost podman[240144]: @ - - [23/Nov/2025:09:38:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144033 "" "Go-http-client/1.1"
Nov 23 04:38:41 localhost podman[240144]: @ - - [23/Nov/2025:09:38:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16298 "" "Go-http-client/1.1"
Nov 23 04:38:43 localhost python3[261828]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:38:43 localhost podman[261865]: 
Nov 23 04:38:43 localhost podman[261865]: 2025-11-23 09:38:43.407727052 +0000 UTC m=+0.085165197 container create ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_dhcp, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 04:38:43 localhost podman[261865]: 2025-11-23 09:38:43.364802135 +0000 UTC m=+0.042240280 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:38:43 localhost python3[261828]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume 
/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:38:44 localhost python3.9[262013]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:38:45 localhost python3.9[262125]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:38:45 localhost systemd[1]: tmp-crun.fextJu.mount: Deactivated successfully.
Nov 23 04:38:45 localhost podman[262181]: 2025-11-23 09:38:45.443477776 +0000 UTC m=+0.084438228 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:38:45 localhost podman[262181]: 2025-11-23 09:38:45.507173593 +0000 UTC m=+0.148134075 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 04:38:45 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:38:45 localhost python3.9[262180]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:38:46 localhost python3.9[262314]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890725.6183686-1339-10252148220250/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:38:46 localhost python3.9[262369]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:38:46 localhost systemd[1]: Reloading.
Nov 23 04:38:46 localhost systemd-sysv-generator[262398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:38:46 localhost systemd-rc-local-generator[262395]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:38:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:38:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:38:47 localhost systemd[1]: tmp-crun.5A3qwF.mount: Deactivated successfully.
Nov 23 04:38:47 localhost podman[262461]: 2025-11-23 09:38:47.604410467 +0000 UTC m=+0.094386313 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:38:47 localhost podman[262461]: 2025-11-23 09:38:47.620933865 +0000 UTC m=+0.110909721 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 04:38:47 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:38:47 localhost python3.9[262460]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:38:47 localhost systemd[1]: Reloading.
Nov 23 04:38:47 localhost systemd-rc-local-generator[262504]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:38:47 localhost systemd-sysv-generator[262507]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:38:48 localhost systemd[1]: Starting neutron_dhcp_agent container...
Nov 23 04:38:48 localhost systemd[1]: Started libcrun container.
Nov 23 04:38:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e915e4cf3434529e5475e3733225e96c47ddca1a9a3fb5f111e5c3b3ac9c1638/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 23 04:38:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e915e4cf3434529e5475e3733225e96c47ddca1a9a3fb5f111e5c3b3ac9c1638/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:38:48 localhost sshd[262536]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:38:48 localhost podman[262519]: 2025-11-23 09:38:48.371923945 +0000 UTC m=+0.135881812 container init ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, org.label-schema.schema-version=1.0)
Nov 23 04:38:48 localhost podman[262519]: 2025-11-23 09:38:48.383009288 +0000 UTC m=+0.146967155 container start ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3)
Nov 23 04:38:48 localhost podman[262519]: neutron_dhcp_agent
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + sudo -E kolla_set_configs
Nov 23 04:38:48 localhost systemd[1]: Started neutron_dhcp_agent container.
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Validating config file
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Copying service configuration files
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Writing out command to execute
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/c8b37597a29e61f341a0e3f5416437aac1a5cd21cb3a407dd674c7a7a1ff41da
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: ++ cat /run_command
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + CMD=/usr/bin/neutron-dhcp-agent
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + ARGS=
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + sudo kolla_copy_cacerts
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + [[ ! -n '' ]]
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + . kolla_extend_start
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: Running command: '/usr/bin/neutron-dhcp-agent'
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + umask 0022
Nov 23 04:38:48 localhost neutron_dhcp_agent[262533]: + exec /usr/bin/neutron-dhcp-agent
Nov 23 04:38:49 localhost neutron_dhcp_agent[262533]: 2025-11-23 09:38:49.658 262539 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 23 04:38:49 localhost neutron_dhcp_agent[262533]: 2025-11-23 09:38:49.658 262539 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m
Nov 23 04:38:50 localhost neutron_dhcp_agent[262533]: 2025-11-23 09:38:50.029 262539 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m
Nov 23 04:38:50 localhost python3.9[262659]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:38:50 localhost systemd[1]: Stopping neutron_dhcp_agent container...
Nov 23 04:38:50 localhost systemd[1]: libpod-ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a.scope: Deactivated successfully.
Nov 23 04:38:50 localhost systemd[1]: libpod-ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a.scope: Consumed 1.981s CPU time.
Nov 23 04:38:50 localhost podman[262664]: 2025-11-23 09:38:50.635958758 +0000 UTC m=+0.393506048 container died ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 04:38:50 localhost systemd[1]: tmp-crun.Up67cO.mount: Deactivated successfully.
Nov 23 04:38:50 localhost podman[262664]: 2025-11-23 09:38:50.694680554 +0000 UTC m=+0.452227824 container cleanup ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_dhcp)
Nov 23 04:38:50 localhost podman[262664]: neutron_dhcp_agent
Nov 23 04:38:50 localhost podman[262700]: error opening file `/run/crun/ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a/status`: No such file or directory
Nov 23 04:38:50 localhost podman[262688]: 2025-11-23 09:38:50.789603119 +0000 UTC m=+0.066191235 container cleanup ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=neutron_dhcp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']})
Nov 23 04:38:50 localhost podman[262688]: neutron_dhcp_agent
Nov 23 04:38:50 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Nov 23 04:38:50 localhost systemd[1]: Stopped neutron_dhcp_agent container.
Nov 23 04:38:50 localhost systemd[1]: Starting neutron_dhcp_agent container...
Nov 23 04:38:50 localhost systemd[1]: Started libcrun container.
Nov 23 04:38:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e915e4cf3434529e5475e3733225e96c47ddca1a9a3fb5f111e5c3b3ac9c1638/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 23 04:38:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e915e4cf3434529e5475e3733225e96c47ddca1a9a3fb5f111e5c3b3ac9c1638/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:38:50 localhost podman[262702]: 2025-11-23 09:38:50.938053583 +0000 UTC m=+0.113274442 container init ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=neutron_dhcp, org.label-schema.license=GPLv2)
Nov 23 04:38:50 localhost podman[262702]: 2025-11-23 09:38:50.946775084 +0000 UTC m=+0.121995943 container start ee1498964ef9b063af230ece07b23dff20b74decc12de5b9beecc3333c59388a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=neutron_dhcp, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '78d32cc5948c0586450206cdb58b541f2dca961c8667d775905273838ad2d545'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 23 04:38:50 localhost podman[262702]: neutron_dhcp_agent
Nov 23 04:38:50 localhost neutron_dhcp_agent[262717]: + sudo -E kolla_set_configs
Nov 23 04:38:50 localhost systemd[1]: Started neutron_dhcp_agent container.
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Validating config file
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Copying service configuration files
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Writing out command to execute
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/c8b37597a29e61f341a0e3f5416437aac1a5cd21cb3a407dd674c7a7a1ff41da
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: ++ cat /run_command
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: + CMD=/usr/bin/neutron-dhcp-agent
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: + ARGS=
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: + sudo kolla_copy_cacerts
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: + [[ ! -n '' ]]
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: + . kolla_extend_start
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: Running command: '/usr/bin/neutron-dhcp-agent'
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: + umask 0022
Nov 23 04:38:51 localhost neutron_dhcp_agent[262717]: + exec /usr/bin/neutron-dhcp-agent
Nov 23 04:38:51 localhost systemd[1]: session-58.scope: Deactivated successfully.
Nov 23 04:38:51 localhost systemd[1]: session-58.scope: Consumed 34.614s CPU time.
Nov 23 04:38:51 localhost systemd-logind[761]: Session 58 logged out. Waiting for processes to exit.
Nov 23 04:38:51 localhost systemd-logind[761]: Removed session 58.
Nov 23 04:38:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:52.209 262721 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 23 04:38:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:52.210 262721 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m
Nov 23 04:38:52 localhost openstack_network_exporter[242118]: ERROR   09:38:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:38:52 localhost openstack_network_exporter[242118]: ERROR   09:38:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:38:52 localhost openstack_network_exporter[242118]: ERROR   09:38:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:38:52 localhost openstack_network_exporter[242118]: ERROR   09:38:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:38:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:38:52 localhost openstack_network_exporter[242118]: ERROR   09:38:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:38:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:38:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:52.656 262721 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m
Nov 23 04:38:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:52.917 262721 INFO neutron.agent.dhcp.agent [None req-25b7f0fc-7ee8-4ef1-9943-bff75a0f8d1d - - - - - -] All active networks have been fetched through RPC.#033[00m
Nov 23 04:38:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:52.918 262721 INFO neutron.agent.dhcp.agent [-] Starting network 4888f017-3f3f-45ef-b058-53b634233093 dhcp configuration#033[00m
Nov 23 04:38:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:52.969 262721 INFO neutron.agent.dhcp.agent [-] Starting network bcac49fc-c589-475a-91a8-00a0ba9c2b33 dhcp configuration#033[00m
Nov 23 04:38:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42827 DF PROTO=TCP SPT=45398 DPT=9102 SEQ=4067242079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7598F5040000000001030307) 
Nov 23 04:38:53 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:53.650 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:38:53 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:53.652 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 04:38:53 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:53.652 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:38:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:54.090 262721 INFO oslo.privsep.daemon [None req-3394aace-45dc-447a-9b00-0ab5b357b07e - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpuqfwym_r/privsep.sock']#033[00m
Nov 23 04:38:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42828 DF PROTO=TCP SPT=45398 DPT=9102 SEQ=4067242079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7598F9190000000001030307) 
Nov 23 04:38:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:54.683 262721 INFO oslo.privsep.daemon [None req-3394aace-45dc-447a-9b00-0ab5b357b07e - - - - - -] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 04:38:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:54.586 262754 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 04:38:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:54.591 262754 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 04:38:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:54.594 262754 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 23 04:38:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:54.595 262754 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262754#033[00m
Nov 23 04:38:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:54.689 262721 WARNING oslo_privsep.priv_context [None req-4f74dea8-3945-4cee-8884-28656c7a6880 - - - - - -] privsep daemon already running#033[00m
Nov 23 04:38:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:55.269 262721 INFO oslo.privsep.daemon [None req-3394aace-45dc-447a-9b00-0ab5b357b07e - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpshn5oi6a/privsep.sock']#033[00m
Nov 23 04:38:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53345 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1757326687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7598FCD90000000001030307) 
Nov 23 04:38:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:55.888 262721 INFO oslo.privsep.daemon [None req-3394aace-45dc-447a-9b00-0ab5b357b07e - - - - - -] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 04:38:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:55.781 262764 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 04:38:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:55.784 262764 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 04:38:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:55.787 262764 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 23 04:38:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:55.787 262764 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262764#033[00m
Nov 23 04:38:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:55.893 262721 WARNING oslo_privsep.priv_context [None req-4f74dea8-3945-4cee-8884-28656c7a6880 - - - - - -] privsep daemon already running#033[00m
Nov 23 04:38:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42829 DF PROTO=TCP SPT=45398 DPT=9102 SEQ=4067242079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759901190000000001030307) 
Nov 23 04:38:56 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:56.817 262721 INFO oslo.privsep.daemon [None req-3394aace-45dc-447a-9b00-0ab5b357b07e - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpjjyat5xg/privsep.sock']#033[00m
Nov 23 04:38:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20310 DF PROTO=TCP SPT=40558 DPT=9102 SEQ=3474784123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759904D90000000001030307) 
Nov 23 04:38:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:57.378 262721 INFO oslo.privsep.daemon [None req-3394aace-45dc-447a-9b00-0ab5b357b07e - - - - - -] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 04:38:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:57.281 262780 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 04:38:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:57.286 262780 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 04:38:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:57.290 262780 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 23 04:38:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:57.291 262780 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262780#033[00m
Nov 23 04:38:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:57.382 262721 WARNING oslo_privsep.priv_context [None req-4f74dea8-3945-4cee-8884-28656c7a6880 - - - - - -] privsep daemon already running#033[00m
Nov 23 04:38:58 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:58.700 262721 INFO neutron.agent.linux.ip_lib [None req-3394aace-45dc-447a-9b00-0ab5b357b07e - - - - - -] Device tap5da37c9f-8b cannot be used as it has no MAC address#033[00m
Nov 23 04:38:58 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:38:58.701 262721 INFO neutron.agent.linux.ip_lib [None req-4f74dea8-3945-4cee-8884-28656c7a6880 - - - - - -] Device tap5836fd2a-7b cannot be used as it has no MAC address#033[00m
Nov 23 04:38:58 localhost kernel: device tap5da37c9f-8b entered promiscuous mode
Nov 23 04:38:58 localhost NetworkManager[5990]: <info>  [1763890738.7717] manager: (tap5da37c9f-8b): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00025|binding|INFO|Claiming lport 5da37c9f-8b39-4d6a-87a9-86fa97a64e12 for this chassis.
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00026|binding|INFO|5da37c9f-8b39-4d6a-87a9-86fa97a64e12: Claiming unknown
Nov 23 04:38:58 localhost systemd-udevd[262804]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 04:38:58 localhost kernel: device tap5836fd2a-7b entered promiscuous mode
Nov 23 04:38:58 localhost NetworkManager[5990]: <info>  [1763890738.7840] manager: (tap5836fd2a-7b): new Generic device (/org/freedesktop/NetworkManager/Devices/14)
Nov 23 04:38:58 localhost systemd-udevd[262807]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00027|if_status|INFO|Not updating pb chassis for 5836fd2a-7ba0-417c-b0e1-91c14dd29120 now as sb is readonly
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00028|ovn_bfd|INFO|Enabled BFD on interface ovn-b7d5b3-0
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-c237ed-0
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00030|ovn_bfd|INFO|Enabled BFD on interface ovn-49b8a0-0
Nov 23 04:38:58 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:58.799 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.3/24', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-bcac49fc-c589-475a-91a8-00a0ba9c2b33', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcac49fc-c589-475a-91a8-00a0ba9c2b33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1915d3e5d4254231a0517e2dcf35848f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ca41f3-3e94-4959-b1b5-1e81bd2c9bc1, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=5da37c9f-8b39-4d6a-87a9-86fa97a64e12) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:38:58 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:58.801 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 5da37c9f-8b39-4d6a-87a9-86fa97a64e12 in datapath bcac49fc-c589-475a-91a8-00a0ba9c2b33 bound to our chassis#033[00m
Nov 23 04:38:58 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:58.806 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7815adda-9032-4bb8-a73f-77a5fc7e4640 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 04:38:58 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:58.806 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bcac49fc-c589-475a-91a8-00a0ba9c2b33, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:38:58 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:58.807 159429 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpcx98lox7/privsep.sock']#033[00m
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00031|binding|INFO|Claiming lport 5836fd2a-7ba0-417c-b0e1-91c14dd29120 for this chassis.
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00032|binding|INFO|5836fd2a-7ba0-417c-b0e1-91c14dd29120: Claiming unknown
Nov 23 04:38:58 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:58.874 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-4888f017-3f3f-45ef-b058-53b634233093', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4888f017-3f3f-45ef-b058-53b634233093', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1915d3e5d4254231a0517e2dcf35848f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11af2473-7670-43cb-8698-dcf3af8d28c8, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=5836fd2a-7ba0-417c-b0e1-91c14dd29120) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00033|binding|INFO|Setting lport 5da37c9f-8b39-4d6a-87a9-86fa97a64e12 ovn-installed in OVS
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00034|binding|INFO|Setting lport 5da37c9f-8b39-4d6a-87a9-86fa97a64e12 up in Southbound
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00035|binding|INFO|Setting lport 5836fd2a-7ba0-417c-b0e1-91c14dd29120 ovn-installed in OVS
Nov 23 04:38:58 localhost ovn_controller[153786]: 2025-11-23T09:38:58Z|00036|binding|INFO|Setting lport 5836fd2a-7ba0-417c-b0e1-91c14dd29120 up in Southbound
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.420 159429 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.420 159429 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpcx98lox7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.310 262865 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.316 262865 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.320 262865 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.320 262865 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262865#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.424 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[74bc2a2e-80b8-44bd-9dc9-5da3138ce34a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.872 262865 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.872 262865 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:38:59 localhost ovn_metadata_agent[159423]: 2025-11-23 09:38:59.873 262865 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:39:00 localhost ovn_metadata_agent[159423]: 2025-11-23 09:39:00.014 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[981a603f-9e94-42dc-a9f5-29658e757433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:39:00 localhost ovn_metadata_agent[159423]: 2025-11-23 09:39:00.015 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 5836fd2a-7ba0-417c-b0e1-91c14dd29120 in datapath 4888f017-3f3f-45ef-b058-53b634233093 unbound from our chassis#033[00m
Nov 23 04:39:00 localhost ovn_metadata_agent[159423]: 2025-11-23 09:39:00.019 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9f87db19-0599-462f-b8c0-280fa85e1e72 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 04:39:00 localhost ovn_metadata_agent[159423]: 2025-11-23 09:39:00.019 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4888f017-3f3f-45ef-b058-53b634233093, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:39:00 localhost ovn_metadata_agent[159423]: 2025-11-23 09:39:00.020 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[6952a3a0-f8a0-40c1-8b05-f13b3d559e87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:39:00 localhost podman[262915]: 
Nov 23 04:39:00 localhost podman[262915]: 2025-11-23 09:39:00.124432016 +0000 UTC m=+0.072074810 container create a07e5b7c536ec489ffa211ec759d35773c50ef2791607d9d60b8929525be8c52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bcac49fc-c589-475a-91a8-00a0ba9c2b33, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:39:00 localhost systemd[1]: Started libpod-conmon-a07e5b7c536ec489ffa211ec759d35773c50ef2791607d9d60b8929525be8c52.scope.
Nov 23 04:39:00 localhost systemd[1]: tmp-crun.TQfQ1L.mount: Deactivated successfully.
Nov 23 04:39:00 localhost podman[262915]: 2025-11-23 09:39:00.084516369 +0000 UTC m=+0.032159233 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:39:00 localhost systemd[1]: Started libcrun container.
Nov 23 04:39:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9342bd011b8e115e0cd6bbf14949558ebf0481caecaa1f66407f0fcfb773cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:39:00 localhost podman[262915]: 2025-11-23 09:39:00.204377005 +0000 UTC m=+0.152019799 container init a07e5b7c536ec489ffa211ec759d35773c50ef2791607d9d60b8929525be8c52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bcac49fc-c589-475a-91a8-00a0ba9c2b33, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:39:00 localhost podman[262915]: 2025-11-23 09:39:00.211654508 +0000 UTC m=+0.159297302 container start a07e5b7c536ec489ffa211ec759d35773c50ef2791607d9d60b8929525be8c52 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bcac49fc-c589-475a-91a8-00a0ba9c2b33, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 04:39:00 localhost dnsmasq[262955]: started, version 2.85 cachesize 150
Nov 23 04:39:00 localhost dnsmasq[262955]: DNS service limited to local subnets
Nov 23 04:39:00 localhost dnsmasq[262955]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 04:39:00 localhost dnsmasq[262955]: warning: no upstream servers configured
Nov 23 04:39:00 localhost dnsmasq-dhcp[262955]: DHCP, static leases only on 192.168.0.0, lease time 1d
Nov 23 04:39:00 localhost dnsmasq[262955]: read /var/lib/neutron/dhcp/bcac49fc-c589-475a-91a8-00a0ba9c2b33/addn_hosts - 2 addresses
Nov 23 04:39:00 localhost dnsmasq-dhcp[262955]: read /var/lib/neutron/dhcp/bcac49fc-c589-475a-91a8-00a0ba9c2b33/host
Nov 23 04:39:00 localhost dnsmasq-dhcp[262955]: read /var/lib/neutron/dhcp/bcac49fc-c589-475a-91a8-00a0ba9c2b33/opts
Nov 23 04:39:00 localhost podman[262937]: 
Nov 23 04:39:00 localhost podman[262937]: 2025-11-23 09:39:00.241371985 +0000 UTC m=+0.094483215 container create f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 04:39:00 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:39:00.273 262721 INFO neutron.agent.dhcp.agent [None req-4ab877f5-a68b-4a2d-a3f6-0782629d4fb9 - - - - - -] Finished network bcac49fc-c589-475a-91a8-00a0ba9c2b33 dhcp configuration#033[00m
Nov 23 04:39:00 localhost systemd[1]: Started libpod-conmon-f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6.scope.
Nov 23 04:39:00 localhost systemd[1]: Started libcrun container.
Nov 23 04:39:00 localhost podman[262937]: 2025-11-23 09:39:00.193777934 +0000 UTC m=+0.046889234 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:39:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aaedf84d18e8f5cac94b0f6d56ff43a8182966c832c7206a8b41fc2cc58f4d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:39:00 localhost podman[262937]: 2025-11-23 09:39:00.302179107 +0000 UTC m=+0.155290367 container init f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 04:39:00 localhost podman[262937]: 2025-11-23 09:39:00.308690819 +0000 UTC m=+0.161802079 container start f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 23 04:39:00 localhost dnsmasq[262961]: started, version 2.85 cachesize 150
Nov 23 04:39:00 localhost dnsmasq[262961]: DNS service limited to local subnets
Nov 23 04:39:00 localhost dnsmasq[262961]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 04:39:00 localhost dnsmasq[262961]: warning: no upstream servers configured
Nov 23 04:39:00 localhost dnsmasq-dhcp[262961]: DHCP, static leases only on 192.168.122.0, lease time 1d
Nov 23 04:39:00 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 04:39:00 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:39:00 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:39:00 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:39:00.362 262721 INFO neutron.agent.dhcp.agent [None req-28aaa2ef-e0a2-4c07-9368-2e555c62962d - - - - - -] Finished network 4888f017-3f3f-45ef-b058-53b634233093 dhcp configuration#033[00m
Nov 23 04:39:00 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:39:00.362 262721 INFO neutron.agent.dhcp.agent [None req-25b7f0fc-7ee8-4ef1-9943-bff75a0f8d1d - - - - - -] Synchronizing state complete#033[00m
Nov 23 04:39:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42830 DF PROTO=TCP SPT=45398 DPT=9102 SEQ=4067242079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759910D90000000001030307) 
Nov 23 04:39:00 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:39:00.407 262721 INFO neutron.agent.dhcp.agent [None req-25b7f0fc-7ee8-4ef1-9943-bff75a0f8d1d - - - - - -] DHCP agent started#033[00m
Nov 23 04:39:00 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:39:00.511 262721 INFO neutron.agent.dhcp.agent [None req-932b2f3e-9526-4e9e-84d4-543d0b19ba34 - - - - - -] DHCP configuration for ports {'98ef2da5-f5cb-44e8-a4b2-f6178c6c8332', 'd3912d14-a3e0-4df9-b811-f3bd90f44559', '61c3623c-5020-45a1-ac7f-a7f97707758b'} is completed#033[00m
Nov 23 04:39:00 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:39:00.785 262721 INFO neutron.agent.dhcp.agent [None req-0f1e38f4-cd93-475f-aa05-bf409fba1db3 - - - - - -] DHCP configuration for ports {'98ef2da5-f5cb-44e8-a4b2-f6178c6c8332', '61c3623c-5020-45a1-ac7f-a7f97707758b', 'e0005f93-e3d4-4607-a8bd-d715c3013354', 'cb90d712-4442-4d65-b8ed-0e95bb9a7fdd', 'd3912d14-a3e0-4df9-b811-f3bd90f44559', '796046a4-2720-44da-bd08-f50f7bf76530'} is completed#033[00m
Nov 23 04:39:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:39:02 localhost podman[262962]: 2025-11-23 09:39:02.177811428 +0000 UTC m=+0.084877330 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_id=edpm)
Nov 23 04:39:02 localhost podman[262962]: 2025-11-23 09:39:02.189236551 +0000 UTC m=+0.096302403 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm)
Nov 23 04:39:02 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:39:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:39:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:39:04 localhost podman[262982]: 2025-11-23 09:39:04.158557094 +0000 UTC m=+0.065516216 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:39:04 localhost podman[262981]: 2025-11-23 09:39:04.222842398 +0000 UTC m=+0.129775100 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 23 04:39:04 localhost podman[262981]: 2025-11-23 09:39:04.238229376 +0000 UTC m=+0.145162118 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public)
Nov 23 04:39:04 localhost podman[262982]: 2025-11-23 09:39:04.247776018 +0000 UTC m=+0.154735090 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:39:04 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:39:04 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:39:07 localhost nova_compute[230084]: 2025-11-23 09:39:07.801 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:08 localhost nova_compute[230084]: 2025-11-23 09:39:08.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:08 localhost nova_compute[230084]: 2025-11-23 09:39:08.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42831 DF PROTO=TCP SPT=45398 DPT=9102 SEQ=4067242079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759930DA0000000001030307) 
Nov 23 04:39:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:39:09.239 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:39:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:39:09.239 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:39:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:39:09.239 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:39:09 localhost nova_compute[230084]: 2025-11-23 09:39:09.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:39:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:39:10 localhost nova_compute[230084]: 2025-11-23 09:39:10.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:10 localhost nova_compute[230084]: 2025-11-23 09:39:10.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:10 localhost nova_compute[230084]: 2025-11-23 09:39:10.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:39:11 localhost podman[240144]: time="2025-11-23T09:39:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:39:11 localhost podman[240144]: @ - - [23/Nov/2025:09:39:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:39:11 localhost podman[240144]: @ - - [23/Nov/2025:09:39:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17679 "" "Go-http-client/1.1"
Nov 23 04:39:11 localhost nova_compute[230084]: 2025-11-23 09:39:11.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:11 localhost nova_compute[230084]: 2025-11-23 09:39:11.548 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:39:11 localhost nova_compute[230084]: 2025-11-23 09:39:11.548 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:39:11 localhost nova_compute[230084]: 2025-11-23 09:39:11.582 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:39:11 localhost nova_compute[230084]: 2025-11-23 09:39:11.583 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:39:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:39:12 localhost podman[263027]: 2025-11-23 09:39:12.171446103 +0000 UTC m=+0.082660822 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:39:12 localhost podman[263027]: 2025-11-23 09:39:12.176017323 +0000 UTC m=+0.087232022 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:39:12 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:39:12 localhost podman[263028]: 2025-11-23 09:39:12.222716671 +0000 UTC m=+0.129518023 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:39:12 localhost podman[263028]: 2025-11-23 09:39:12.235981913 +0000 UTC m=+0.142783305 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:39:12 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:39:12 localhost nova_compute[230084]: 2025-11-23 09:39:12.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:39:12 localhost nova_compute[230084]: 2025-11-23 09:39:12.573 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:39:12 localhost nova_compute[230084]: 2025-11-23 09:39:12.574 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:39:12 localhost nova_compute[230084]: 2025-11-23 09:39:12.574 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:39:12 localhost nova_compute[230084]: 2025-11-23 09:39:12.574 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:39:12 localhost nova_compute[230084]: 2025-11-23 09:39:12.575 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.107 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.318 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.320 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12493MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.321 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.321 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.383 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.384 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.406 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.868 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.874 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.902 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.905 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:39:13 localhost nova_compute[230084]: 2025-11-23 09:39:13.905 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:39:16 localhost podman[263195]: 2025-11-23 09:39:16.174677071 +0000 UTC m=+0.082219450 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 23 04:39:16 localhost podman[263195]: 2025-11-23 09:39:16.215160583 +0000 UTC m=+0.122702982 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:39:16 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:39:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:39:18 localhost podman[263221]: 2025-11-23 09:39:18.184829227 +0000 UTC m=+0.088490526 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:39:18 localhost podman[263221]: 2025-11-23 09:39:18.195932681 +0000 UTC m=+0.099593960 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 23 04:39:18 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:39:20 localhost sshd[263240]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:39:22 localhost openstack_network_exporter[242118]: ERROR   09:39:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:39:22 localhost openstack_network_exporter[242118]: ERROR   09:39:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:39:22 localhost openstack_network_exporter[242118]: ERROR   09:39:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:39:22 localhost openstack_network_exporter[242118]: ERROR   09:39:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:39:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:39:22 localhost openstack_network_exporter[242118]: ERROR   09:39:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:39:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:39:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51443 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4156790931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75996A300000000001030307) 
Nov 23 04:39:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51444 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4156790931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75996E190000000001030307) 
Nov 23 04:39:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42832 DF PROTO=TCP SPT=45398 DPT=9102 SEQ=4067242079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759970DA0000000001030307) 
Nov 23 04:39:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51445 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4156790931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759976190000000001030307) 
Nov 23 04:39:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53346 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1757326687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A75997AD90000000001030307) 
Nov 23 04:39:28 localhost ovn_controller[153786]: 2025-11-23T09:39:28Z|00037|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 23 04:39:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51446 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4156790931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759985D90000000001030307) 
Nov 23 04:39:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:39:33 localhost podman[263242]: 2025-11-23 09:39:33.175702547 +0000 UTC m=+0.082651472 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:39:33 localhost podman[263242]: 2025-11-23 09:39:33.212688557 +0000 UTC m=+0.119637482 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118)
Nov 23 04:39:33 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:39:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:39:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:39:35 localhost podman[263259]: 2025-11-23 09:39:35.165446081 +0000 UTC m=+0.076020785 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:39:35 localhost podman[263260]: 2025-11-23 09:39:35.234139291 +0000 UTC m=+0.136620210 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:39:35 localhost podman[263260]: 2025-11-23 09:39:35.245786309 +0000 UTC m=+0.148267178 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:39:35 localhost podman[263259]: 2025-11-23 09:39:35.254297066 +0000 UTC m=+0.164871740 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 04:39:35 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:39:35 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:39:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51447 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4156790931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7599A6DA0000000001030307) 
Nov 23 04:39:41 localhost podman[240144]: time="2025-11-23T09:39:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:39:41 localhost podman[240144]: @ - - [23/Nov/2025:09:39:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:39:41 localhost podman[240144]: @ - - [23/Nov/2025:09:39:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17697 "" "Go-http-client/1.1"
Nov 23 04:39:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:39:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:39:43 localhost podman[263304]: 2025-11-23 09:39:43.172267599 +0000 UTC m=+0.078090050 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 04:39:43 localhost podman[263304]: 2025-11-23 09:39:43.181954645 +0000 UTC m=+0.087777136 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:39:43 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:39:43 localhost systemd[1]: tmp-crun.ob1vFX.mount: Deactivated successfully.
Nov 23 04:39:43 localhost podman[263305]: 2025-11-23 09:39:43.231010836 +0000 UTC m=+0.132561844 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:39:43 localhost podman[263305]: 2025-11-23 09:39:43.241898964 +0000 UTC m=+0.143449962 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:39:43 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:39:47 localhost podman[263345]: 2025-11-23 09:39:47.176383751 +0000 UTC m=+0.079508228 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Nov 23 04:39:47 localhost podman[263345]: 2025-11-23 09:39:47.214102651 +0000 UTC m=+0.117227088 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 04:39:47 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:39:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:39:49 localhost podman[263370]: 2025-11-23 09:39:49.170078276 +0000 UTC m=+0.072926202 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:39:49 localhost podman[263370]: 2025-11-23 09:39:49.210072366 +0000 UTC m=+0.112920292 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 04:39:49 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:39:49 localhost sshd[263389]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:39:49 localhost systemd-logind[761]: New session 59 of user zuul.
Nov 23 04:39:49 localhost systemd[1]: Started Session 59 of User zuul.
Nov 23 04:39:50 localhost sshd[263409]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:39:51 localhost python3.9[263502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:39:52 localhost openstack_network_exporter[242118]: ERROR   09:39:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:39:52 localhost openstack_network_exporter[242118]: ERROR   09:39:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:39:52 localhost openstack_network_exporter[242118]: ERROR   09:39:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:39:52 localhost openstack_network_exporter[242118]: ERROR   09:39:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:39:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:39:52 localhost openstack_network_exporter[242118]: ERROR   09:39:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:39:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:39:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3192 DF PROTO=TCP SPT=39062 DPT=9102 SEQ=4250333200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7599DF600000000001030307) 
Nov 23 04:39:53 localhost python3.9[263615]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:39:53 localhost network[263632]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:39:53 localhost network[263633]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:39:53 localhost network[263634]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:39:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3193 DF PROTO=TCP SPT=39062 DPT=9102 SEQ=4250333200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7599E35A0000000001030307) 
Nov 23 04:39:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51448 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4156790931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7599E6D90000000001030307) 
Nov 23 04:39:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3194 DF PROTO=TCP SPT=39062 DPT=9102 SEQ=4250333200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7599EB590000000001030307) 
Nov 23 04:39:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:39:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42833 DF PROTO=TCP SPT=45398 DPT=9102 SEQ=4067242079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7599EED90000000001030307) 
Nov 23 04:40:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3195 DF PROTO=TCP SPT=39062 DPT=9102 SEQ=4250333200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A7599FB190000000001030307) 
Nov 23 04:40:00 localhost python3.9[263868]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 23 04:40:01 localhost python3.9[263931]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:40:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:40:04 localhost podman[263934]: 2025-11-23 09:40:04.207303478 +0000 UTC m=+0.115253604 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:40:04 localhost podman[263934]: 2025-11-23 09:40:04.238259357 +0000 UTC m=+0.146209473 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:40:04 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:40:04 localhost nova_compute[230084]: 2025-11-23 09:40:04.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:04 localhost nova_compute[230084]: 2025-11-23 09:40:04.548 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 04:40:04 localhost nova_compute[230084]: 2025-11-23 09:40:04.722 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 04:40:04 localhost nova_compute[230084]: 2025-11-23 09:40:04.723 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:04 localhost nova_compute[230084]: 2025-11-23 09:40:04.723 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 04:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:40:06 localhost podman[264026]: 2025-11-23 09:40:06.19560317 +0000 UTC m=+0.094716556 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:40:06 localhost podman[264026]: 2025-11-23 09:40:06.204894908 +0000 UTC m=+0.104008264 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:40:06 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:40:06 localhost systemd[1]: tmp-crun.UfdStO.mount: Deactivated successfully.
Nov 23 04:40:06 localhost podman[264025]: 2025-11-23 09:40:06.300615809 +0000 UTC m=+0.200816825 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64)
Nov 23 04:40:06 localhost podman[264025]: 2025-11-23 09:40:06.34289134 +0000 UTC m=+0.243092386 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:40:06 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:40:06 localhost python3.9[264097]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:40:06 localhost nova_compute[230084]: 2025-11-23 09:40:06.765 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:07 localhost python3.9[264214]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:40:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3196 DF PROTO=TCP SPT=39062 DPT=9102 SEQ=4250333200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759A1ADA0000000001030307) 
Nov 23 04:40:08 localhost nova_compute[230084]: 2025-11-23 09:40:08.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:08 localhost python3.9[264325]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:40:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:40:09.243 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:40:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:40:09.244 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:40:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:40:09.244 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:40:09 localhost nova_compute[230084]: 2025-11-23 09:40:09.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:09 localhost nova_compute[230084]: 2025-11-23 09:40:09.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:10 localhost python3.9[264437]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:10 localhost nova_compute[230084]: 2025-11-23 09:40:10.559 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:10 localhost nova_compute[230084]: 2025-11-23 09:40:10.587 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:10 localhost nova_compute[230084]: 2025-11-23 09:40:10.587 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:10 localhost nova_compute[230084]: 2025-11-23 09:40:10.588 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:40:11 localhost podman[240144]: time="2025-11-23T09:40:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:40:11 localhost podman[240144]: @ - - [23/Nov/2025:09:40:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:40:11 localhost podman[240144]: @ - - [23/Nov/2025:09:40:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17707 "" "Go-http-client/1.1"
Nov 23 04:40:11 localhost python3.9[264547]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:40:11 localhost nova_compute[230084]: 2025-11-23 09:40:11.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:11 localhost nova_compute[230084]: 2025-11-23 09:40:11.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:40:11 localhost nova_compute[230084]: 2025-11-23 09:40:11.548 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:40:11 localhost nova_compute[230084]: 2025-11-23 09:40:11.574 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:40:11 localhost nova_compute[230084]: 2025-11-23 09:40:11.574 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:12 localhost nova_compute[230084]: 2025-11-23 09:40:12.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:12 localhost python3.9[264659]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:40:13 localhost nova_compute[230084]: 2025-11-23 09:40:13.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:40:13 localhost nova_compute[230084]: 2025-11-23 09:40:13.571 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:40:13 localhost nova_compute[230084]: 2025-11-23 09:40:13.571 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:40:13 localhost nova_compute[230084]: 2025-11-23 09:40:13.571 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:40:13 localhost nova_compute[230084]: 2025-11-23 09:40:13.571 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:40:13 localhost nova_compute[230084]: 2025-11-23 09:40:13.572 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:40:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:40:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:40:13 localhost systemd[1]: tmp-crun.C7eitd.mount: Deactivated successfully.
Nov 23 04:40:13 localhost podman[264792]: 2025-11-23 09:40:13.821936988 +0000 UTC m=+0.090884484 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 23 04:40:13 localhost podman[264793]: 2025-11-23 09:40:13.868239016 +0000 UTC m=+0.133225725 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:40:13 localhost podman[264793]: 2025-11-23 09:40:13.875764558 +0000 UTC m=+0.140751207 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:40:13 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:40:13 localhost podman[264792]: 2025-11-23 09:40:13.901232029 +0000 UTC m=+0.170179505 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:40:13 localhost python3.9[264791]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:40:13 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:40:13 localhost network[264849]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:40:13 localhost network[264850]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:40:13 localhost network[264851]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.041 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.294 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.295 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12465MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.296 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.296 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.423 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.424 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.501 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.588 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.588 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.614 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.679 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: HW_CPU_X86_BMI,COMPUTE_NODE,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 04:40:14 localhost nova_compute[230084]: 2025-11-23 09:40:14.706 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:40:15 localhost nova_compute[230084]: 2025-11-23 09:40:15.181 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:40:15 localhost nova_compute[230084]: 2025-11-23 09:40:15.187 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:40:15 localhost nova_compute[230084]: 2025-11-23 09:40:15.216 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:40:15 localhost nova_compute[230084]: 2025-11-23 09:40:15.219 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:40:15 localhost nova_compute[230084]: 2025-11-23 09:40:15.220 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:40:15 localhost sshd[264955]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:40:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:40:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:40:17 localhost systemd[1]: tmp-crun.7Cw8FR.mount: Deactivated successfully.
Nov 23 04:40:17 localhost podman[265012]: 2025-11-23 09:40:17.728747133 +0000 UTC m=+0.089834655 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 04:40:17 localhost podman[265012]: 2025-11-23 09:40:17.763458072 +0000 UTC m=+0.124545614 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 04:40:17 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:40:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:40:20 localhost podman[265170]: 2025-11-23 09:40:20.185711805 +0000 UTC m=+0.089693571 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 04:40:20 localhost podman[265170]: 2025-11-23 09:40:20.192108266 +0000 UTC m=+0.096090012 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:40:20 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:40:20 localhost python3.9[265242]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 04:40:21 localhost python3.9[265352]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 23 04:40:22 localhost openstack_network_exporter[242118]: ERROR   09:40:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:40:22 localhost openstack_network_exporter[242118]: ERROR   09:40:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:40:22 localhost openstack_network_exporter[242118]: ERROR   09:40:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:40:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:40:22 localhost openstack_network_exporter[242118]: ERROR   09:40:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:40:22 localhost openstack_network_exporter[242118]: ERROR   09:40:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:40:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:40:23 localhost python3.9[265462]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6061 DF PROTO=TCP SPT=52240 DPT=9102 SEQ=2346233626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759A54900000000001030307) 
Nov 23 04:40:23 localhost python3.9[265519]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6062 DF PROTO=TCP SPT=52240 DPT=9102 SEQ=2346233626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759A58990000000001030307) 
Nov 23 04:40:24 localhost python3.9[265629]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3197 DF PROTO=TCP SPT=39062 DPT=9102 SEQ=4250333200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759A5AD90000000001030307) 
Nov 23 04:40:25 localhost python3.9[265739]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:40:25 localhost python3.9[265849]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:40:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6063 DF PROTO=TCP SPT=52240 DPT=9102 SEQ=2346233626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759A609A0000000001030307) 
Nov 23 04:40:26 localhost python3.9[265961]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:40:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51449 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4156790931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759A64D90000000001030307) 
Nov 23 04:40:27 localhost python3.9[266073]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:40:28 localhost python3.9[266184]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:28 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation.
Nov 23 04:40:28 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 04:40:28 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:40:28 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:40:29 localhost python3.9[266295]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:30 localhost python3.9[266405]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6064 DF PROTO=TCP SPT=52240 DPT=9102 SEQ=2346233626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759A70590000000001030307) 
Nov 23 04:40:30 localhost python3.9[266515]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:31 localhost python3.9[266625]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:32 localhost python3.9[266735]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:40:34 localhost python3.9[266847]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:40:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:40:34 localhost podman[266958]: 2025-11-23 09:40:34.998741238 +0000 UTC m=+0.084140642 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 04:40:35 localhost podman[266958]: 2025-11-23 09:40:35.033372125 +0000 UTC m=+0.118771559 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Nov 23 04:40:35 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:40:35 localhost python3.9[266957]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:36 localhost python3.9[267033]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:40:36 localhost systemd[1]: tmp-crun.wjZGzX.mount: Deactivated successfully.
Nov 23 04:40:36 localhost podman[267144]: 2025-11-23 09:40:36.66149673 +0000 UTC m=+0.087000459 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350)
Nov 23 04:40:36 localhost systemd[1]: tmp-crun.dUZV7b.mount: Deactivated successfully.
Nov 23 04:40:36 localhost podman[267145]: 2025-11-23 09:40:36.711275272 +0000 UTC m=+0.135573069 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:40:36 localhost podman[267145]: 2025-11-23 09:40:36.722986195 +0000 UTC m=+0.147283962 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:40:36 localhost podman[267144]: 2025-11-23 09:40:36.730818015 +0000 UTC m=+0.156321744 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350)
Nov 23 04:40:36 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:40:36 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:40:36 localhost python3.9[267143]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:37 localhost python3.9[267241]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:40:38 localhost python3.9[267351]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6065 DF PROTO=TCP SPT=52240 DPT=9102 SEQ=2346233626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759A90DA0000000001030307) 
Nov 23 04:40:39 localhost python3.9[267461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:39 localhost python3.9[267518]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:40 localhost python3.9[267628]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:41 localhost python3.9[267685]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:41 localhost podman[240144]: time="2025-11-23T09:40:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:40:41 localhost podman[240144]: @ - - [23/Nov/2025:09:40:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:40:41 localhost podman[240144]: @ - - [23/Nov/2025:09:40:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17702 "" "Go-http-client/1.1"
Nov 23 04:40:42 localhost python3.9[267795]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:40:42 localhost systemd[1]: Reloading.
Nov 23 04:40:42 localhost systemd-rc-local-generator[267816]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:40:42 localhost systemd-sysv-generator[267820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:43 localhost python3.9[267943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:43 localhost python3.9[268000]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:40:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:40:44 localhost podman[268037]: 2025-11-23 09:40:44.184928634 +0000 UTC m=+0.091402176 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 23 04:40:44 localhost podman[268040]: 2025-11-23 09:40:44.229095556 +0000 UTC m=+0.135096726 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:40:44 localhost podman[268040]: 2025-11-23 09:40:44.237260235 +0000 UTC m=+0.143261445 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:40:44 localhost podman[268037]: 2025-11-23 09:40:44.247569731 +0000 UTC m=+0.154043263 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 23 04:40:44 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:40:44 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:40:44 localhost python3.9[268151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:45 localhost python3.9[268208]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:45 localhost python3.9[268318]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:40:45 localhost systemd[1]: Reloading.
Nov 23 04:40:45 localhost systemd-sysv-generator[268344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:40:45 localhost systemd-rc-local-generator[268340]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:40:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:40:46 localhost systemd[1]: Starting Create netns directory...
Nov 23 04:40:46 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 23 04:40:46 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 23 04:40:46 localhost systemd[1]: Finished Create netns directory.
Nov 23 04:40:47 localhost python3.9[268469]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:40:47 localhost python3.9[268579]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:40:48 localhost podman[268635]: 2025-11-23 09:40:48.176174439 +0000 UTC m=+0.078196033 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:40:48 localhost podman[268635]: 2025-11-23 09:40:48.266129036 +0000 UTC m=+0.168150630 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 23 04:40:48 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:40:48 localhost python3.9[268642]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:40:49 localhost python3.9[268770]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:40:50 localhost python3.9[268880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:40:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:40:50 localhost systemd[1]: tmp-crun.F4QggB.mount: Deactivated successfully.
Nov 23 04:40:50 localhost podman[268938]: 2025-11-23 09:40:50.567910206 +0000 UTC m=+0.091389967 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:40:50 localhost podman[268938]: 2025-11-23 09:40:50.579380013 +0000 UTC m=+0.102859754 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 23 04:40:50 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:40:50 localhost python3.9[268937]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.hulbvi9_ recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:52 localhost openstack_network_exporter[242118]: ERROR   09:40:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:40:52 localhost openstack_network_exporter[242118]: ERROR   09:40:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:40:52 localhost openstack_network_exporter[242118]: ERROR   09:40:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:40:52 localhost openstack_network_exporter[242118]: ERROR   09:40:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:40:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:40:52 localhost openstack_network_exporter[242118]: ERROR   09:40:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:40:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:40:52 localhost python3.9[269065]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:40:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12147 DF PROTO=TCP SPT=42250 DPT=9102 SEQ=2116090172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759AC9BF0000000001030307) 
Nov 23 04:40:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12148 DF PROTO=TCP SPT=42250 DPT=9102 SEQ=2116090172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759ACDD90000000001030307) 
Nov 23 04:40:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6066 DF PROTO=TCP SPT=52240 DPT=9102 SEQ=2346233626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759AD0D90000000001030307) 
Nov 23 04:40:55 localhost python3.9[269342]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 23 04:40:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12149 DF PROTO=TCP SPT=42250 DPT=9102 SEQ=2116090172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759AD5DA0000000001030307) 
Nov 23 04:40:56 localhost python3.9[269452]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:40:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3198 DF PROTO=TCP SPT=39062 DPT=9102 SEQ=4250333200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759AD8DA0000000001030307) 
Nov 23 04:40:57 localhost python3.9[269562]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 23 04:41:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12150 DF PROTO=TCP SPT=42250 DPT=9102 SEQ=2116090172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759AE5990000000001030307) 
Nov 23 04:41:02 localhost python3[269698]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:41:02 localhost python3[269698]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072",#012          "Digest": "sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-11-21T06:11:34.680484424Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251118",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012          
"Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 249489385,#012          "VirtualSize": 249489385,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/4f2d68ffc7b0d2ad7154f194ce01f6add8f68d1c87ebccb7dfe58b78cf788c91/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012                    "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012                    "sha256:d9e3e9c6b6b086eeb756b403557bba77ecef73e97936fb3285a5484cd95a1b1a"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251118",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012       
   "History": [#012               {#012                    "created": "2025-11-18T01:56:49.795434035Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:49.795512415Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:52.547242013Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947310748Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947327778Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947358359Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947372589Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                 
   "created": "2025-11-21T06:10:01.94738527Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94739397Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:02.324930938Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:36.349393468Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:39.924297673Z",#012                    "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012                    
"empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:40.346524368Z",#012                 
Nov 23 04:41:02 localhost nova_compute[230084]: 2025-11-23 09:41:02.673 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:03 localhost python3.9[269868]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:41:04 localhost python3.9[269980]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:04 localhost python3.9[270035]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:41:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:41:05 localhost systemd[1]: tmp-crun.dk3NfN.mount: Deactivated successfully.
Nov 23 04:41:05 localhost podman[270088]: 2025-11-23 09:41:05.187136155 +0000 UTC m=+0.090137903 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Nov 23 04:41:05 localhost podman[270088]: 2025-11-23 09:41:05.224900345 +0000 UTC m=+0.127902023 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 04:41:05 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:41:05 localhost python3.9[270163]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890864.8669946-1367-152043116726428/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:06 localhost python3.9[270218]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:41:07 localhost podman[270239]: 2025-11-23 09:41:07.177136602 +0000 UTC m=+0.078494101 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:41:07 localhost podman[270239]: 2025-11-23 09:41:07.191131616 +0000 UTC m=+0.092489095 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:41:07 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:41:07 localhost systemd[1]: tmp-crun.DkA3vf.mount: Deactivated successfully.
Nov 23 04:41:07 localhost podman[270238]: 2025-11-23 09:41:07.289819217 +0000 UTC m=+0.194672410 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, config_id=edpm, release=1755695350, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 23 04:41:07 localhost podman[270238]: 2025-11-23 09:41:07.300153514 +0000 UTC m=+0.205006787 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Nov 23 04:41:07 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:41:08 localhost python3.9[270370]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:41:08 localhost nova_compute[230084]: 2025-11-23 09:41:08.570 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:08 localhost nova_compute[230084]: 2025-11-23 09:41:08.571 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12151 DF PROTO=TCP SPT=42250 DPT=9102 SEQ=2116090172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759B06DA0000000001030307) 
Nov 23 04:41:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:41:09.244 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:41:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:41:09.245 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:41:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:41:09.245 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:41:09 localhost python3.9[270480]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:41:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:41:10 localhost nova_compute[230084]: 2025-11-23 09:41:10.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:10 localhost nova_compute[230084]: 2025-11-23 09:41:10.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:10 localhost nova_compute[230084]: 2025-11-23 09:41:10.546 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:41:10 localhost python3.9[270590]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 23 04:41:11 localhost podman[240144]: time="2025-11-23T09:41:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:41:11 localhost podman[240144]: @ - - [23/Nov/2025:09:41:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:41:11 localhost podman[240144]: @ - - [23/Nov/2025:09:41:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17701 "" "Go-http-client/1.1"
Nov 23 04:41:11 localhost nova_compute[230084]: 2025-11-23 09:41:11.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:11 localhost python3.9[270700]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 23 04:41:12 localhost python3.9[270810]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:41:12 localhost nova_compute[230084]: 2025-11-23 09:41:12.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:12 localhost nova_compute[230084]: 2025-11-23 09:41:12.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:41:12 localhost nova_compute[230084]: 2025-11-23 09:41:12.547 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:41:12 localhost nova_compute[230084]: 2025-11-23 09:41:12.562 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:41:12 localhost nova_compute[230084]: 2025-11-23 09:41:12.562 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:12 localhost python3.9[270867]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:13 localhost nova_compute[230084]: 2025-11-23 09:41:13.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:13 localhost nova_compute[230084]: 2025-11-23 09:41:13.566 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:41:13 localhost nova_compute[230084]: 2025-11-23 09:41:13.566 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:41:13 localhost nova_compute[230084]: 2025-11-23 09:41:13.567 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:41:13 localhost nova_compute[230084]: 2025-11-23 09:41:13.567 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:41:13 localhost nova_compute[230084]: 2025-11-23 09:41:13.567 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:41:13 localhost python3.9[270977]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.021 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.200 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.201 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12496MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.201 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.202 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.255 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.256 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.287 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:41:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:41:14 localhost systemd[1]: tmp-crun.4TQJny.mount: Deactivated successfully.
Nov 23 04:41:14 localhost podman[271130]: 2025-11-23 09:41:14.545084916 +0000 UTC m=+0.096673718 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:41:14 localhost podman[271130]: 2025-11-23 09:41:14.579144598 +0000 UTC m=+0.130733390 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:41:14 localhost podman[271131]: 2025-11-23 09:41:14.586384002 +0000 UTC m=+0.135825546 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:41:14 localhost podman[271131]: 2025-11-23 09:41:14.591429207 +0000 UTC m=+0.140870771 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:41:14 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:41:14 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:41:14 localhost python3.9[271129]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.738 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.744 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.757 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.760 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:41:14 localhost nova_compute[230084]: 2025-11-23 09:41:14.760 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:41:15 localhost nova_compute[230084]: 2025-11-23 09:41:15.761 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:41:17 localhost sshd[271223]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:41:18 localhost python3.9[271388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 23 04:41:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:41:19 localhost podman[271430]: 2025-11-23 09:41:19.167209242 +0000 UTC m=+0.071612888 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:41:19 localhost podman[271430]: 2025-11-23 09:41:19.234282337 +0000 UTC m=+0.138685973 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:41:19 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:41:19 localhost python3.9[271549]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:41:20 localhost podman[271659]: 2025-11-23 09:41:20.788703369 +0000 UTC m=+0.092267580 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:41:20 localhost podman[271659]: 2025-11-23 09:41:20.804121161 +0000 UTC m=+0.107685382 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Nov 23 04:41:20 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:41:21 localhost python3.9[271660]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:41:21 localhost systemd[1]: Reloading.
Nov 23 04:41:21 localhost systemd-rc-local-generator[271719]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:41:21 localhost systemd-sysv-generator[271725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:22 localhost python3.9[271840]: ansible-ansible.builtin.service_facts Invoked
Nov 23 04:41:22 localhost network[271857]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 23 04:41:22 localhost network[271858]: 'network-scripts' will be removed from distribution in near future.
Nov 23 04:41:22 localhost network[271859]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 23 04:41:22 localhost openstack_network_exporter[242118]: ERROR   09:41:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:41:22 localhost openstack_network_exporter[242118]: ERROR   09:41:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:41:22 localhost openstack_network_exporter[242118]: ERROR   09:41:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:41:22 localhost openstack_network_exporter[242118]: ERROR   09:41:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:41:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:41:22 localhost openstack_network_exporter[242118]: ERROR   09:41:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:41:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:41:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48900 DF PROTO=TCP SPT=43414 DPT=9102 SEQ=413821283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759B3EF00000000001030307) 
Nov 23 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:41:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48901 DF PROTO=TCP SPT=43414 DPT=9102 SEQ=413821283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759B42D90000000001030307) 
Nov 23 04:41:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12152 DF PROTO=TCP SPT=42250 DPT=9102 SEQ=2116090172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759B46D90000000001030307) 
Nov 23 04:41:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48902 DF PROTO=TCP SPT=43414 DPT=9102 SEQ=413821283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759B4AD90000000001030307) 
Nov 23 04:41:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6067 DF PROTO=TCP SPT=52240 DPT=9102 SEQ=2346233626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759B4ED90000000001030307) 
Nov 23 04:41:27 localhost sshd[272019]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:41:28 localhost python3.9[272094]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:28 localhost python3.9[272205]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:29 localhost python3.9[272316]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48903 DF PROTO=TCP SPT=43414 DPT=9102 SEQ=413821283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759B5A9A0000000001030307) 
Nov 23 04:41:31 localhost python3.9[272428]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:32 localhost python3.9[272539]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:32 localhost python3.9[272650]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:33 localhost python3.9[272761]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:34 localhost python3.9[272872]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:41:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:41:36 localhost podman[272891]: 2025-11-23 09:41:36.190467618 +0000 UTC m=+0.087880273 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 23 04:41:36 localhost podman[272891]: 2025-11-23 09:41:36.204027131 +0000 UTC m=+0.101439716 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Nov 23 04:41:36 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:41:37 localhost python3.9[273002]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:41:37 localhost podman[273058]: 2025-11-23 09:41:37.487505583 +0000 UTC m=+0.091046457 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:41:37 localhost podman[273058]: 2025-11-23 09:41:37.498796085 +0000 UTC m=+0.102336979 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:41:37 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:41:37 localhost podman[273054]: 2025-11-23 09:41:37.545036942 +0000 UTC m=+0.150373864 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, version=9.6, config_id=edpm, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 04:41:37 localhost podman[273054]: 2025-11-23 09:41:37.557429884 +0000 UTC m=+0.162766836 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Nov 23 04:41:37 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:41:37 localhost python3.9[273156]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:38 localhost python3.9[273266]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48904 DF PROTO=TCP SPT=43414 DPT=9102 SEQ=413821283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759B7ADA0000000001030307) 
Nov 23 04:41:39 localhost python3.9[273376]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:40 localhost python3.9[273486]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:40 localhost python3.9[273596]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:41 localhost podman[240144]: time="2025-11-23T09:41:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:41:41 localhost podman[240144]: @ - - [23/Nov/2025:09:41:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:41:41 localhost podman[240144]: @ - - [23/Nov/2025:09:41:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17698 "" "Go-http-client/1.1"
Nov 23 04:41:41 localhost sshd[273614]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:41:42 localhost python3.9[273708]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:43 localhost python3.9[273818]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:44 localhost python3.9[273928]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:41:45 localhost podman[274038]: 2025-11-23 09:41:45.157759898 +0000 UTC m=+0.071838674 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 23 04:41:45 localhost podman[274038]: 2025-11-23 09:41:45.192889317 +0000 UTC m=+0.106968083 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:41:45 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:41:45 localhost podman[274039]: 2025-11-23 09:41:45.209931943 +0000 UTC m=+0.120104665 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:41:45 localhost podman[274039]: 2025-11-23 09:41:45.246140942 +0000 UTC m=+0.156313714 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:41:45 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:41:45 localhost python3.9[274045]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:45 localhost python3.9[274187]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:46 localhost python3.9[274297]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:46 localhost python3.9[274407]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:47 localhost python3.9[274517]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:48 localhost python3.9[274627]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:48 localhost python3.9[274737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:41:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:41:49 localhost podman[274848]: 2025-11-23 09:41:49.620677323 +0000 UTC m=+0.061979629 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 23 04:41:49 localhost podman[274848]: 2025-11-23 09:41:49.655590297 +0000 UTC m=+0.096892643 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:41:49 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:41:49 localhost python3.9[274847]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:41:50 localhost python3.9[274981]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 23 04:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:41:51 localhost podman[275048]: 2025-11-23 09:41:51.185859933 +0000 UTC m=+0.093500953 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 23 04:41:51 localhost podman[275048]: 2025-11-23 09:41:51.195930562 +0000 UTC m=+0.103571592 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Nov 23 04:41:51 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:41:51 localhost python3.9[275108]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 23 04:41:51 localhost systemd[1]: Reloading.
Nov 23 04:41:51 localhost systemd-rc-local-generator[275134]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:41:51 localhost systemd-sysv-generator[275137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:51 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:41:52 localhost openstack_network_exporter[242118]: ERROR   09:41:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:41:52 localhost openstack_network_exporter[242118]: ERROR   09:41:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:41:52 localhost openstack_network_exporter[242118]: ERROR   09:41:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:41:52 localhost openstack_network_exporter[242118]: ERROR   09:41:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:41:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:41:52 localhost openstack_network_exporter[242118]: ERROR   09:41:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:41:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:41:52 localhost python3.9[275254]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:41:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26744 DF PROTO=TCP SPT=49902 DPT=9102 SEQ=1855165335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759BB4200000000001030307) 
Nov 23 04:41:53 localhost python3.9[275365]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:41:53 localhost python3.9[275476]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:41:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26745 DF PROTO=TCP SPT=49902 DPT=9102 SEQ=1855165335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759BB8190000000001030307) 
Nov 23 04:41:54 localhost python3.9[275587]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:41:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48905 DF PROTO=TCP SPT=43414 DPT=9102 SEQ=413821283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759BBAD90000000001030307) 
Nov 23 04:41:55 localhost sshd[275589]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:41:55 localhost python3.9[275700]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:41:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26746 DF PROTO=TCP SPT=49902 DPT=9102 SEQ=1855165335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759BC01A0000000001030307) 
Nov 23 04:41:56 localhost python3.9[275811]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:41:57 localhost python3.9[275922]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:41:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12153 DF PROTO=TCP SPT=42250 DPT=9102 SEQ=2116090172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759BC4D90000000001030307) 
Nov 23 04:41:58 localhost python3.9[276033]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:42:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26747 DF PROTO=TCP SPT=49902 DPT=9102 SEQ=1855165335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759BCFD90000000001030307) 
Nov 23 04:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:42:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5640 writes, 24K keys, 5640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5640 writes, 724 syncs, 7.79 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:42:03 localhost python3.9[276144]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:04 localhost python3.9[276254]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:05 localhost python3.9[276364]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:05 localhost python3.9[276474]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:42:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 4929 writes, 22K keys, 4929 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4929 writes, 684 syncs, 7.21 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:42:06 localhost python3.9[276584]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:42:06 localhost podman[276695]: 2025-11-23 09:42:06.945002421 +0000 UTC m=+0.093098908 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Nov 23 04:42:06 localhost podman[276695]: 2025-11-23 09:42:06.957090232 +0000 UTC m=+0.105186709 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm)
Nov 23 04:42:06 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:42:07 localhost python3.9[276694]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:07 localhost python3.9[276823]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:42:08 localhost systemd[1]: tmp-crun.HHAKRY.mount: Deactivated successfully.
Nov 23 04:42:08 localhost podman[276933]: 2025-11-23 09:42:08.193087947 +0000 UTC m=+0.093532640 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Nov 23 04:42:08 localhost podman[276934]: 2025-11-23 09:42:08.234348024 +0000 UTC m=+0.133195304 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:42:08 localhost podman[276934]: 2025-11-23 09:42:08.244887134 +0000 UTC m=+0.143734474 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:42:08 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:42:08 localhost podman[276933]: 2025-11-23 09:42:08.265513383 +0000 UTC m=+0.165958186 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm)
Nov 23 04:42:08 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:42:08 localhost python3.9[276935]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26748 DF PROTO=TCP SPT=49902 DPT=9102 SEQ=1855165335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759BF0D90000000001030307) 
Nov 23 04:42:09 localhost python3.9[277085]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:42:09.246 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:42:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:42:09.247 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:42:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:42:09.247 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:42:09 localhost nova_compute[230084]: 2025-11-23 09:42:09.547 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:10 localhost python3.9[277195]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:10 localhost nova_compute[230084]: 2025-11-23 09:42:10.542 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:10 localhost nova_compute[230084]: 2025-11-23 09:42:10.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:11 localhost podman[240144]: time="2025-11-23T09:42:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:42:11 localhost podman[240144]: @ - - [23/Nov/2025:09:42:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:42:11 localhost podman[240144]: @ - - [23/Nov/2025:09:42:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17704 "" "Go-http-client/1.1"
Nov 23 04:42:11 localhost nova_compute[230084]: 2025-11-23 09:42:11.542 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:11 localhost nova_compute[230084]: 2025-11-23 09:42:11.557 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:11 localhost nova_compute[230084]: 2025-11-23 09:42:11.558 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:11 localhost nova_compute[230084]: 2025-11-23 09:42:11.558 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:42:13 localhost nova_compute[230084]: 2025-11-23 09:42:13.548 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:13 localhost nova_compute[230084]: 2025-11-23 09:42:13.548 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:42:13 localhost nova_compute[230084]: 2025-11-23 09:42:13.549 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:42:13 localhost nova_compute[230084]: 2025-11-23 09:42:13.758 230088 DEBUG nova.compute.manager [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:42:13 localhost nova_compute[230084]: 2025-11-23 09:42:13.758 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:14 localhost nova_compute[230084]: 2025-11-23 09:42:14.546 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:14 localhost nova_compute[230084]: 2025-11-23 09:42:14.564 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:42:14 localhost nova_compute[230084]: 2025-11-23 09:42:14.564 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:42:14 localhost nova_compute[230084]: 2025-11-23 09:42:14.565 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:42:14 localhost nova_compute[230084]: 2025-11-23 09:42:14.565 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:42:14 localhost nova_compute[230084]: 2025-11-23 09:42:14.566 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:14.999 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.242 230088 WARNING nova.virt.libvirt.driver [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.244 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12491MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.244 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.245 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.310 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.311 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.330 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.758 230088 DEBUG oslo_concurrency.processutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.763 230088 DEBUG nova.compute.provider_tree [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.776 230088 DEBUG nova.scheduler.client.report [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.779 230088 DEBUG nova.compute.resource_tracker [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:42:15 localhost nova_compute[230084]: 2025-11-23 09:42:15.779 230088 DEBUG oslo_concurrency.lockutils [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:42:16 localhost podman[277258]: 2025-11-23 09:42:16.177291318 +0000 UTC m=+0.083979295 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:42:16 localhost podman[277258]: 2025-11-23 09:42:16.189864422 +0000 UTC m=+0.096552439 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:42:16 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:42:16 localhost podman[277257]: 2025-11-23 09:42:16.27809902 +0000 UTC m=+0.185235720 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:42:16 localhost podman[277257]: 2025-11-23 09:42:16.285463715 +0000 UTC m=+0.192600385 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:42:16 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:42:17 localhost nova_compute[230084]: 2025-11-23 09:42:17.779 230088 DEBUG oslo_service.periodic_task [None req-ffd737ae-126c-42b4-bb26-628789df0ae8 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:42:18 localhost python3.9[277391]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 23 04:42:19 localhost sshd[277410]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:42:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:42:19 localhost systemd-logind[761]: New session 60 of user zuul.
Nov 23 04:42:19 localhost systemd[1]: Started Session 60 of User zuul.
Nov 23 04:42:19 localhost systemd[1]: tmp-crun.vkU4MT.mount: Deactivated successfully.
Nov 23 04:42:19 localhost podman[277412]: 2025-11-23 09:42:19.884638442 +0000 UTC m=+0.070740103 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:42:19 localhost podman[277412]: 2025-11-23 09:42:19.948913512 +0000 UTC m=+0.135015143 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 04:42:19 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:42:19 localhost systemd[1]: session-60.scope: Deactivated successfully.
Nov 23 04:42:19 localhost systemd-logind[761]: Session 60 logged out. Waiting for processes to exit.
Nov 23 04:42:20 localhost systemd-logind[761]: Removed session 60.
Nov 23 04:42:20 localhost python3.9[277544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:42:21 localhost python3.9[277646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890940.2681725-3040-202765246515415/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:21 localhost python3.9[277792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:42:22 localhost podman[277862]: 2025-11-23 09:42:22.178777328 +0000 UTC m=+0.081038107 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 23 04:42:22 localhost podman[277862]: 2025-11-23 09:42:22.192071802 +0000 UTC m=+0.094332591 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:42:22 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:42:22 localhost python3.9[277861]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:22 localhost openstack_network_exporter[242118]: ERROR   09:42:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:42:22 localhost openstack_network_exporter[242118]: ERROR   09:42:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:42:22 localhost openstack_network_exporter[242118]: ERROR   09:42:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:42:22 localhost openstack_network_exporter[242118]: ERROR   09:42:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:42:22 localhost openstack_network_exporter[242118]: ERROR   09:42:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:42:22 localhost python3.9[277988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:42:23 localhost python3.9[278074]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890942.3653145-3040-69133523570346/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12137 DF PROTO=TCP SPT=37448 DPT=9102 SEQ=2547291997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759C294F0000000001030307) 
Nov 23 04:42:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12138 DF PROTO=TCP SPT=37448 DPT=9102 SEQ=2547291997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759C2D590000000001030307) 
Nov 23 04:42:24 localhost sshd[278183]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:42:24 localhost python3.9[278182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:42:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26749 DF PROTO=TCP SPT=49902 DPT=9102 SEQ=1855165335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759C30D90000000001030307) 
Nov 23 04:42:25 localhost python3.9[278270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890944.2060115-3040-126615381922480/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=f7e1adb02ce1fc9821a25015c3baa66ad68c917c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:25 localhost python3.9[278396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:42:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12139 DF PROTO=TCP SPT=37448 DPT=9102 SEQ=2547291997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759C35590000000001030307) 
Nov 23 04:42:27 localhost python3.9[278482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890945.3838093-3040-102065054587019/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48906 DF PROTO=TCP SPT=43414 DPT=9102 SEQ=413821283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759C38DA0000000001030307) 
Nov 23 04:42:27 localhost python3.9[278590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:42:28 localhost python3.9[278676]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763890947.2371898-3040-252821975321266/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:29 localhost python3.9[278786]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:42:30 localhost python3.9[278896]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:42:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12140 DF PROTO=TCP SPT=37448 DPT=9102 SEQ=2547291997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759C45190000000001030307) 
Nov 23 04:42:30 localhost python3.9[279006]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:42:31 localhost python3.9[279118]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:42:32 localhost python3.9[279226]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:42:33 localhost python3.9[279336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:42:33 localhost python3.9[279391]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:34 localhost python3.9[279499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 23 04:42:34 localhost python3.9[279554]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 23 04:42:35 localhost python3.9[279664]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 23 04:42:36 localhost python3.9[279774]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:42:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:42:37 localhost systemd[1]: tmp-crun.Co49EY.mount: Deactivated successfully.
Nov 23 04:42:37 localhost podman[279885]: 2025-11-23 09:42:37.201705205 +0000 UTC m=+0.105665623 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:42:37 localhost podman[279885]: 2025-11-23 09:42:37.212899972 +0000 UTC m=+0.116860350 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:42:37 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:42:37 localhost python3[279884]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:42:37 localhost python3[279884]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66",#012          "Digest": "sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-11-21T06:33:31.011385583Z",#012          "Config": {#012               "User": "nova",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251118",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012        
  "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 1211770748,#012          "VirtualSize": 1211770748,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012                    "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012                    "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012                    "sha256:b9b598b1eb0c08906fe1bc9a64fc0e72719a6197d83669d2eb4309e69a00aa62",#012                    "sha256:33e3811ab7487b27336fdf94252d5a875b17efb438cbc4ffc943f851ad3eceb6"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251118",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 
Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "nova",#012          "History": [#012               {#012                    "created": "2025-11-18T01:56:49.795434035Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:49.795512415Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:52.547242013Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947310748Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947327778Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": 
"2025-11-21T06:10:01.947358359Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947372589Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94738527Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94739397Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:02.324930938Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:36.349393468Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main 
skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012 
Nov 23 04:42:38 localhost python3.9[280074]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:42:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12141 DF PROTO=TCP SPT=37448 DPT=9102 SEQ=2547291997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759C64D90000000001030307) 
Nov 23 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:42:39 localhost podman[280095]: 2025-11-23 09:42:39.220449683 +0000 UTC m=+0.091106714 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:42:39 localhost podman[280095]: 2025-11-23 09:42:39.235902905 +0000 UTC m=+0.106559876 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:42:39 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:42:39 localhost podman[280094]: 2025-11-23 09:42:39.324841701 +0000 UTC m=+0.195279857 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=)
Nov 23 04:42:39 localhost podman[280094]: 2025-11-23 09:42:39.366926621 +0000 UTC m=+0.237364747 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Nov 23 04:42:39 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:42:40 localhost python3.9[280227]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 23 04:42:40 localhost python3.9[280337]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 23 04:42:41 localhost podman[240144]: time="2025-11-23T09:42:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:42:41 localhost podman[240144]: @ - - [23/Nov/2025:09:42:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:42:41 localhost podman[240144]: @ - - [23/Nov/2025:09:42:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17697 "" "Go-http-client/1.1"
Nov 23 04:42:42 localhost python3[280447]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 23 04:42:42 localhost python3[280447]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66",#012          "Digest": "sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-11-21T06:33:31.011385583Z",#012          "Config": {#012               "User": "nova",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251118",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012        
  "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 1211770748,#012          "VirtualSize": 1211770748,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/bba04dbd2e113925874f779bcae6bca7316b7a66f7b274a8553d6a317f393238/diff:/var/lib/containers/storage/overlay/0f9d0c5706eafcb7728267ea9fd9ea64ca261add44f9daa2b074c99c0b87c98c/diff:/var/lib/containers/storage/overlay/6e9f200c79821db3abfada9ff652f9bd648429ed9bddf6ca26f58a14a261f068/diff:/var/lib/containers/storage/overlay/ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/5d77d5f2ee8b05529dee3e4c9abd16672fdc0c09b8af133b2b88b75f7f780254/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:ccfb371f2e163f0c4b39cf6c44930e270547d620598331da99955639b81e1ba6",#012                    "sha256:573e98f577c8b1610c1485067040ff856a142394fcd22ad4cb9c66b7d1de6bef",#012                    "sha256:5a71e5d7d31f15255619cb8b9384b708744757c93993652418b0f45b0c0931d5",#012                    "sha256:b9b598b1eb0c08906fe1bc9a64fc0e72719a6197d83669d2eb4309e69a00aa62",#012                    "sha256:33e3811ab7487b27336fdf94252d5a875b17efb438cbc4ffc943f851ad3eceb6"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251118",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 
Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "7b76510d5d5adf2ccf627d29bb9dae76",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "nova",#012          "History": [#012               {#012                    "created": "2025-11-18T01:56:49.795434035Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6d427dd138d2b0977a7ef7feaa8bd82d04e99cc5f4a16d555d6cff0cb52d43c6 in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:49.795512415Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251118\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-18T01:56:52.547242013Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947310748Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947327778Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": 
"2025-11-21T06:10:01.947358359Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.947372589Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94738527Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:01.94739397Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:02.324930938Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-21T06:10:36.349393468Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main 
skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012 
Nov 23 04:42:43 localhost python3.9[280615]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:42:43 localhost sshd[280617]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:42:44 localhost python3.9[280729]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:42:45 localhost python3.9[280838]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763890964.5044544-3717-198445460454856/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:42:45 localhost python3.9[280893]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:42:47 localhost python3.9[281003]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:42:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:42:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:42:47 localhost podman[281004]: 2025-11-23 09:42:47.196724813 +0000 UTC m=+0.097840464 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:42:47 localhost podman[281005]: 2025-11-23 09:42:47.255415264 +0000 UTC m=+0.155056156 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:42:47 localhost podman[281005]: 2025-11-23 09:42:47.268916323 +0000 UTC m=+0.168557215 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:42:47 localhost podman[281004]: 2025-11-23 09:42:47.278144869 +0000 UTC m=+0.179260580 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 04:42:47 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:42:47 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:42:47 localhost python3.9[281154]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:42:49 localhost python3.9[281262]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 23 04:42:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:42:50 localhost podman[281355]: 2025-11-23 09:42:50.175826962 +0000 UTC m=+0.080646356 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:42:50 localhost podman[281355]: 2025-11-23 09:42:50.228390401 +0000 UTC m=+0.133209805 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 23 04:42:50 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:42:50 localhost python3.9[281386]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 04:42:50 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 103.9 (346 of 333 items), suggesting rotation.
Nov 23 04:42:50 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 04:42:50 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:42:50 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:42:51 localhost python3.9[281532]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 23 04:42:51 localhost systemd[1]: Stopping nova_compute container...
Nov 23 04:42:52 localhost openstack_network_exporter[242118]: ERROR   09:42:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:42:52 localhost openstack_network_exporter[242118]: ERROR   09:42:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:42:52 localhost openstack_network_exporter[242118]: ERROR   09:42:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:42:52 localhost openstack_network_exporter[242118]: ERROR   09:42:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:42:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:42:52 localhost openstack_network_exporter[242118]: ERROR   09:42:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:42:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:42:53 localhost systemd[1]: tmp-crun.WPF6l7.mount: Deactivated successfully.
Nov 23 04:42:53 localhost podman[281550]: 2025-11-23 09:42:53.19319285 +0000 UTC m=+0.094879845 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 04:42:53 localhost podman[281550]: 2025-11-23 09:42:53.232984379 +0000 UTC m=+0.134671324 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:42:53 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:42:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21462 DF PROTO=TCP SPT=56942 DPT=9102 SEQ=228427388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759C9E800000000001030307) 
Nov 23 04:42:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21463 DF PROTO=TCP SPT=56942 DPT=9102 SEQ=228427388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759CA2990000000001030307) 
Nov 23 04:42:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12142 DF PROTO=TCP SPT=37448 DPT=9102 SEQ=2547291997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759CA4D90000000001030307) 
Nov 23 04:42:55 localhost nova_compute[230084]: 2025-11-23 09:42:55.034 230088 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m
Nov 23 04:42:55 localhost nova_compute[230084]: 2025-11-23 09:42:55.037 230088 DEBUG oslo_concurrency.lockutils [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 04:42:55 localhost nova_compute[230084]: 2025-11-23 09:42:55.037 230088 DEBUG oslo_concurrency.lockutils [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 04:42:55 localhost nova_compute[230084]: 2025-11-23 09:42:55.037 230088 DEBUG oslo_concurrency.lockutils [None req-15f9b66f-80f2-482f-84cc-5293f3958333 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:42:55 localhost journal[229448]: End of file while reading data: Input/output error
Nov 23 04:42:55 localhost systemd[1]: libpod-3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e.scope: Deactivated successfully.
Nov 23 04:42:55 localhost systemd[1]: libpod-3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e.scope: Consumed 17.790s CPU time.
Nov 23 04:42:55 localhost podman[281536]: 2025-11-23 09:42:55.502089169 +0000 UTC m=+3.919478710 container died 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:42:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e-userdata-shm.mount: Deactivated successfully.
Nov 23 04:42:55 localhost systemd[1]: var-lib-containers-storage-overlay-cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c-merged.mount: Deactivated successfully.
Nov 23 04:42:55 localhost podman[281536]: 2025-11-23 09:42:55.67541374 +0000 UTC m=+4.092803231 container cleanup 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 04:42:55 localhost podman[281536]: nova_compute
Nov 23 04:42:55 localhost podman[281597]: error opening file `/run/crun/3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e/status`: No such file or directory
Nov 23 04:42:55 localhost podman[281585]: 2025-11-23 09:42:55.776047038 +0000 UTC m=+0.068689638 container cleanup 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm)
Nov 23 04:42:55 localhost podman[281585]: nova_compute
Nov 23 04:42:55 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 23 04:42:55 localhost systemd[1]: Stopped nova_compute container.
Nov 23 04:42:55 localhost systemd[1]: Starting nova_compute container...
Nov 23 04:42:55 localhost systemd[1]: Started libcrun container.
Nov 23 04:42:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 23 04:42:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 23 04:42:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 23 04:42:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 23 04:42:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf46f9b526a956be87da625d1390fecc07add46a0ed35b94ae15541e650a687c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 04:42:55 localhost podman[281599]: 2025-11-23 09:42:55.922880054 +0000 UTC m=+0.110911462 container init 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 23 04:42:55 localhost podman[281599]: 2025-11-23 09:42:55.933006253 +0000 UTC m=+0.121037661 container start 3c50f7880eed5dd3bed8ed00f95ba96aef9e4dd6f1b3cd18d7bb3c3d9618040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Nov 23 04:42:55 localhost podman[281599]: nova_compute
Nov 23 04:42:55 localhost nova_compute[281613]: + sudo -E kolla_set_configs
Nov 23 04:42:55 localhost systemd[1]: Started nova_compute container.
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Validating config file
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying service configuration files
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /etc/ceph
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Creating directory /etc/ceph
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/ceph
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Writing out command to execute
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:42:56 localhost nova_compute[281613]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 23 04:42:56 localhost nova_compute[281613]: ++ cat /run_command
Nov 23 04:42:56 localhost nova_compute[281613]: + CMD=nova-compute
Nov 23 04:42:56 localhost nova_compute[281613]: + ARGS=
Nov 23 04:42:56 localhost nova_compute[281613]: + sudo kolla_copy_cacerts
Nov 23 04:42:56 localhost nova_compute[281613]: + [[ ! -n '' ]]
Nov 23 04:42:56 localhost nova_compute[281613]: + . kolla_extend_start
Nov 23 04:42:56 localhost nova_compute[281613]: + echo 'Running command: '\''nova-compute'\'''
Nov 23 04:42:56 localhost nova_compute[281613]: Running command: 'nova-compute'
Nov 23 04:42:56 localhost nova_compute[281613]: + umask 0022
Nov 23 04:42:56 localhost nova_compute[281613]: + exec nova-compute
Nov 23 04:42:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21464 DF PROTO=TCP SPT=56942 DPT=9102 SEQ=228427388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759CAA990000000001030307) 
Nov 23 04:42:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26750 DF PROTO=TCP SPT=49902 DPT=9102 SEQ=1855165335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759CAED90000000001030307) 
Nov 23 04:42:57 localhost nova_compute[281613]: 2025-11-23 09:42:57.870 281617 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:42:57 localhost nova_compute[281613]: 2025-11-23 09:42:57.870 281617 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:42:57 localhost nova_compute[281613]: 2025-11-23 09:42:57.871 281617 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 23 04:42:57 localhost nova_compute[281613]: 2025-11-23 09:42:57.871 281617 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 23 04:42:57 localhost nova_compute[281613]: 2025-11-23 09:42:57.985 281617 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.010 281617 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.010 281617 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 23 04:42:58 localhost python3.9[281737]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 23 04:42:58 localhost systemd[1]: Started libpod-conmon-227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec.scope.
Nov 23 04:42:58 localhost systemd[1]: Started libcrun container.
Nov 23 04:42:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 23 04:42:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 23 04:42:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 23 04:42:58 localhost podman[281764]: 2025-11-23 09:42:58.396573037 +0000 UTC m=+0.147242358 container init 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:42:58 localhost podman[281764]: 2025-11-23 09:42:58.412401948 +0000 UTC m=+0.163071249 container start 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=nova_compute_init)
Nov 23 04:42:58 localhost python3.9[281737]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Applying nova statedir ownership
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/4143dbbec5b08621aa3c8eb364f8a7d3e97604e18b7ed41c4bab0da11ed561fd
Nov 23 04:42:58 localhost nova_compute_init[281784]: INFO:nova_statedir:Nova statedir ownership complete
Nov 23 04:42:58 localhost systemd[1]: libpod-227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec.scope: Deactivated successfully.
Nov 23 04:42:58 localhost podman[281797]: 2025-11-23 09:42:58.551976002 +0000 UTC m=+0.054440730 container died 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.554 281617 INFO nova.virt.driver [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 23 04:42:58 localhost podman[281797]: 2025-11-23 09:42:58.581067795 +0000 UTC m=+0.083532523 container cleanup 227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=nova_compute_init, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 04:42:58 localhost systemd[1]: libpod-conmon-227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec.scope: Deactivated successfully.
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.669 281617 INFO nova.compute.provider_config [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.743 281617 DEBUG oslo_concurrency.lockutils [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.744 281617 DEBUG oslo_concurrency.lockutils [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.745 281617 DEBUG oslo_concurrency.lockutils [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.745 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.746 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.746 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.746 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.746 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.746 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.747 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.747 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.747 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.747 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.747 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.748 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.748 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.748 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.748 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.748 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.749 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.749 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.749 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.749 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] console_host                   = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.749 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.750 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.750 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.750 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.750 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.750 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.751 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.751 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.751 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.751 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.751 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.752 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.752 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.752 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.752 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.752 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.753 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.753 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.753 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] host                           = np0005532586.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.753 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.754 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.754 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.754 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.754 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.754 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.755 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.755 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.755 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.755 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.755 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.756 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.756 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.756 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.756 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.756 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.757 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.757 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.757 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.757 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.757 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.757 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.758 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.758 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.758 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.758 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.758 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.759 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.759 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.759 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.759 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.759 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.759 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.760 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.760 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.760 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.761 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.761 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.761 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.761 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.761 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.762 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.762 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.762 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.762 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.762 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.763 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.763 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.763 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.763 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.763 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.764 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.764 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.764 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.764 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.764 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.765 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.765 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.765 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.765 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.765 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.766 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.766 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.766 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.766 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.766 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.766 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.767 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.767 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.767 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.767 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.767 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.768 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.768 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.768 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.768 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.768 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.769 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.769 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.769 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.769 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.769 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.770 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.770 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.770 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.770 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.770 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.770 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.771 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.771 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.771 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.771 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.771 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.772 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.772 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.772 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.772 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.772 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.772 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.773 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.773 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.773 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.773 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.773 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.774 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.774 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.774 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.774 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.775 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.775 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.775 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.775 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.776 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.776 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.776 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.776 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.776 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.777 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.777 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.777 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.777 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.777 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.778 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.778 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.778 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.778 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.778 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.778 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.779 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.779 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.779 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.779 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.779 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.780 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.780 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.780 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.780 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.780 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.781 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.781 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.781 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.781 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.781 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.782 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.782 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.782 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.782 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.782 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.783 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.783 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.783 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.783 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.783 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.784 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.784 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.784 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.784 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.784 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.785 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.785 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.785 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.785 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.785 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.786 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.786 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.786 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.786 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.786 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.787 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.787 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.787 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.787 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.787 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.788 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.788 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.788 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.788 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.788 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.789 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.789 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.789 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.789 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.789 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.789 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.790 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.790 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.790 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.790 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.791 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.791 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.791 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.791 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.791 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.792 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.792 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.792 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.792 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.792 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.793 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.793 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.793 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.793 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.793 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.794 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.794 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.794 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.794 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.794 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.795 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.795 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.795 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.795 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.795 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.796 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.796 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.796 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.796 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.796 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.796 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.797 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.797 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.797 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.797 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.797 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.798 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.798 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.798 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.798 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.798 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.798 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.799 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.799 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.799 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.799 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.799 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.799 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.799 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.800 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.800 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.800 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.800 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.800 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.801 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.801 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.801 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.801 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.801 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.801 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.801 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.802 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.802 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.802 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.802 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.802 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.802 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.802 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.802 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.803 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.803 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.803 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.803 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.803 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.803 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.803 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.804 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.804 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.804 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.804 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.804 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.804 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.804 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.805 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.805 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.805 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.805 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.805 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.805 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.805 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.806 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.806 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.806 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.806 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.806 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.806 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.806 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.806 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.807 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.807 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.807 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.807 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.807 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.807 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.807 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.808 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.808 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.808 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.808 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.808 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.808 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.808 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.808 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.809 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.809 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.809 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.809 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.809 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.809 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.809 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.810 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.810 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.810 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.810 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.810 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.810 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.810 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.811 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.811 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.811 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.811 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.811 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.811 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.812 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.812 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.812 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.812 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.812 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.812 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.812 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.813 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.813 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.813 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.813 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.813 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.813 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.813 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.814 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.814 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.814 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.814 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.814 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.814 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.814 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.814 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.815 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.815 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.815 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.815 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.815 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.815 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.815 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.816 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.816 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.816 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.816 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.816 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.816 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.816 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.817 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.817 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.817 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.817 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.817 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.817 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.817 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.818 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.818 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.818 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.818 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.818 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.818 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.819 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.819 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.819 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.819 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.819 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.819 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.819 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.819 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.820 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.820 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.820 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.820 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.820 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.820 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.820 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.821 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.821 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.821 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.821 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.821 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.821 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.821 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.822 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.822 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.822 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.822 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.822 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.822 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.822 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.823 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.823 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.823 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.823 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.823 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.823 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.823 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.824 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.824 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.824 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.824 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.824 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.824 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.824 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.825 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.825 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.825 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.825 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.825 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.825 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.825 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.826 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.826 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.826 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.826 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.826 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.826 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.826 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.827 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.827 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.827 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.827 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.827 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.827 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.827 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.828 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.828 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.828 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.828 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.828 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.828 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.828 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.829 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.829 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.829 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.829 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.829 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.829 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.829 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.830 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.830 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.830 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.830 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.830 281617 WARNING oslo_config.cfg [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 23 04:42:58 localhost nova_compute[281613]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 23 04:42:58 localhost nova_compute[281613]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 23 04:42:58 localhost nova_compute[281613]: and ``live_migration_inbound_addr`` respectively.
Nov 23 04:42:58 localhost nova_compute[281613]: ).  Its value may be silently ignored in the future.#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.830 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.831 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.831 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.831 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.831 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.831 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.831 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.831 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.832 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.832 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.832 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.832 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.832 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.832 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.832 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.833 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.833 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.833 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.833 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rbd_secret_uuid        = 46550e70-79cb-5f55-bf6d-1204b97e083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.833 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.833 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.833 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.834 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.834 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.834 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.834 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.834 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.834 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.834 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.835 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.835 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.836 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.836 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.836 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.836 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.836 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.837 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.837 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.837 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.837 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.837 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.837 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.837 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.838 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.838 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.838 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.838 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.838 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.838 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.838 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.839 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.839 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.839 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.839 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.839 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.839 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.839 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.840 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.840 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.840 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.840 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.840 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.840 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.840 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.841 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.841 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.841 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.841 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.841 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.841 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.841 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.842 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.842 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.842 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.842 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.842 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.842 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.842 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.843 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.843 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.843 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.843 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.843 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.843 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.843 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.843 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.844 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.844 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.844 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.844 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.844 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.844 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.844 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.845 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.845 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.845 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.845 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.845 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.845 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.845 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.846 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.846 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.846 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.846 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.846 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.846 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.846 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.846 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.847 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.847 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.847 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.847 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.847 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.847 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.847 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.848 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.848 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.848 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.848 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.848 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.848 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.848 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.849 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.849 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.849 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.849 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.849 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.849 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.849 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.850 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.850 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.850 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.850 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.850 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.850 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.850 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.851 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.851 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.851 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.851 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.851 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.851 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.852 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.852 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.852 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.852 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.852 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.852 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.852 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.853 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.853 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.853 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.853 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.853 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.853 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.854 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.854 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.854 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.854 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.854 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.854 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.854 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.854 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.855 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.855 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.855 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.855 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.855 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.855 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.855 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.856 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.856 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.856 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.856 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.856 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.856 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.856 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.857 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.857 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.857 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.857 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.857 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.857 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.857 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.858 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.858 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.858 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.858 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.858 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.858 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.858 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.859 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.859 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.859 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.859 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.859 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.859 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.859 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.860 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.860 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.860 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.860 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.860 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.860 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.860 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.861 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.861 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.861 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.861 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.861 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.861 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.862 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.862 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.862 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.862 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.862 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.862 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.862 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.862 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.863 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.863 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.863 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.863 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.863 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.863 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.863 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.864 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.864 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.864 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.864 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.864 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.864 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.864 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.864 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.865 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.865 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.865 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.865 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.865 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.865 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.865 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.866 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.866 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.866 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.866 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.866 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.866 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.866 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.867 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.867 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.867 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.867 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.867 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.867 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.868 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.868 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.868 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.868 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.868 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.868 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.868 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.869 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.869 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.869 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.869 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.869 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.869 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.869 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.869 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.870 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.870 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.870 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.870 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.870 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.870 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.870 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.871 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.871 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.871 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.871 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.871 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.871 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.871 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.872 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.872 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.872 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.872 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.872 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.872 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.872 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.873 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.873 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.873 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.873 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.873 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.873 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.873 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.874 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.874 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.874 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.874 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.874 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.874 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.874 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.875 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.875 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.875 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.875 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.875 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.875 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.875 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.876 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.876 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.876 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.876 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.876 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.876 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.876 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.877 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.877 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.877 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.877 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.877 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.877 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.877 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.877 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.878 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.878 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.878 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.878 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.878 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.878 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.878 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.879 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.879 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.879 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.879 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.879 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.879 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.879 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.880 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.880 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.880 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.880 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.880 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.880 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.880 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.881 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.881 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.881 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.881 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.881 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.881 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.881 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.882 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.882 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.882 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.882 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.882 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.882 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.882 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.882 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.883 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.883 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.883 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.883 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.883 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.883 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.883 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.884 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.884 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.884 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.884 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.884 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.884 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.884 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.884 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.885 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.885 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.885 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.885 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.885 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.885 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.885 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.886 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.886 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.886 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.886 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.886 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.886 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.886 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.887 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.887 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.887 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.887 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.887 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.887 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.887 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.888 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.888 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.888 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.888 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.888 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.888 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.888 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.888 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.889 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.889 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.889 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.889 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.889 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.889 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.889 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.890 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.890 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.890 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.890 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.890 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.890 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.890 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.891 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.891 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.891 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.891 281617 DEBUG oslo_service.service [None req-344cfaf6-1a13-46bb-8506-ad9a4e3809db - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.892 281617 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.905 281617 INFO nova.virt.node [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Determined node identity 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from /var/lib/nova/compute_id#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.906 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.906 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.906 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.907 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.918 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4c5dcd2a90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.921 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4c5dcd2a90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.922 281617 INFO nova.virt.libvirt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.929 281617 INFO nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Libvirt host capabilities <capabilities>
Nov 23 04:42:58 localhost nova_compute[281613]: 
Nov 23 04:42:58 localhost nova_compute[281613]:  <host>
Nov 23 04:42:58 localhost nova_compute[281613]:    <uuid>94eff25b-7070-4dc8-8cfe-491426a98db3</uuid>
Nov 23 04:42:58 localhost nova_compute[281613]:    <cpu>
Nov 23 04:42:58 localhost nova_compute[281613]:      <arch>x86_64</arch>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model>EPYC-Rome-v4</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <vendor>AMD</vendor>
Nov 23 04:42:58 localhost nova_compute[281613]:      <microcode version='16777317'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <signature family='23' model='49' stepping='0'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='x2apic'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='tsc-deadline'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='osxsave'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='hypervisor'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='tsc_adjust'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='spec-ctrl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='stibp'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='arch-capabilities'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='ssbd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='cmp_legacy'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='topoext'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='virt-ssbd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='lbrv'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='tsc-scale'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='vmcb-clean'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='pause-filter'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='pfthreshold'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='svme-addr-chk'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='rdctl-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='skip-l1dfl-vmentry'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='mds-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature name='pschange-mc-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <pages unit='KiB' size='4'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <pages unit='KiB' size='2048'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <pages unit='KiB' size='1048576'/>
Nov 23 04:42:58 localhost nova_compute[281613]:    </cpu>
Nov 23 04:42:58 localhost nova_compute[281613]:    <power_management>
Nov 23 04:42:58 localhost nova_compute[281613]:      <suspend_mem/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <suspend_disk/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <suspend_hybrid/>
Nov 23 04:42:58 localhost nova_compute[281613]:    </power_management>
Nov 23 04:42:58 localhost nova_compute[281613]:    <iommu support='no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:    <migration_features>
Nov 23 04:42:58 localhost nova_compute[281613]:      <live/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <uri_transports>
Nov 23 04:42:58 localhost nova_compute[281613]:        <uri_transport>tcp</uri_transport>
Nov 23 04:42:58 localhost nova_compute[281613]:        <uri_transport>rdma</uri_transport>
Nov 23 04:42:58 localhost nova_compute[281613]:      </uri_transports>
Nov 23 04:42:58 localhost nova_compute[281613]:    </migration_features>
Nov 23 04:42:58 localhost nova_compute[281613]:    <topology>
Nov 23 04:42:58 localhost nova_compute[281613]:      <cells num='1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <cell id='0'>
Nov 23 04:42:58 localhost nova_compute[281613]:          <memory unit='KiB'>16116604</memory>
Nov 23 04:42:58 localhost nova_compute[281613]:          <pages unit='KiB' size='4'>4029151</pages>
Nov 23 04:42:58 localhost nova_compute[281613]:          <pages unit='KiB' size='2048'>0</pages>
Nov 23 04:42:58 localhost nova_compute[281613]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 23 04:42:58 localhost nova_compute[281613]:          <distances>
Nov 23 04:42:58 localhost nova_compute[281613]:            <sibling id='0' value='10'/>
Nov 23 04:42:58 localhost nova_compute[281613]:          </distances>
Nov 23 04:42:58 localhost nova_compute[281613]:          <cpus num='8'>
Nov 23 04:42:58 localhost nova_compute[281613]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 23 04:42:58 localhost nova_compute[281613]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 23 04:42:58 localhost nova_compute[281613]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 23 04:42:58 localhost nova_compute[281613]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 23 04:42:58 localhost nova_compute[281613]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 23 04:42:58 localhost nova_compute[281613]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 23 04:42:58 localhost nova_compute[281613]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 23 04:42:58 localhost nova_compute[281613]:          </cpus>
Nov 23 04:42:58 localhost nova_compute[281613]:        </cell>
Nov 23 04:42:58 localhost nova_compute[281613]:      </cells>
Nov 23 04:42:58 localhost nova_compute[281613]:    </topology>
Nov 23 04:42:58 localhost nova_compute[281613]:    <cache>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 23 04:42:58 localhost nova_compute[281613]:    </cache>
Nov 23 04:42:58 localhost nova_compute[281613]:    <secmodel>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model>selinux</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <doi>0</doi>
Nov 23 04:42:58 localhost nova_compute[281613]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 23 04:42:58 localhost nova_compute[281613]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 23 04:42:58 localhost nova_compute[281613]:    </secmodel>
Nov 23 04:42:58 localhost nova_compute[281613]:    <secmodel>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model>dac</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <doi>0</doi>
Nov 23 04:42:58 localhost nova_compute[281613]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 23 04:42:58 localhost nova_compute[281613]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 23 04:42:58 localhost nova_compute[281613]:    </secmodel>
Nov 23 04:42:58 localhost nova_compute[281613]:  </host>
Nov 23 04:42:58 localhost nova_compute[281613]: 
Nov 23 04:42:58 localhost nova_compute[281613]:  <guest>
Nov 23 04:42:58 localhost nova_compute[281613]:    <os_type>hvm</os_type>
Nov 23 04:42:58 localhost nova_compute[281613]:    <arch name='i686'>
Nov 23 04:42:58 localhost nova_compute[281613]:      <wordsize>32</wordsize>
Nov 23 04:42:58 localhost nova_compute[281613]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <domain type='qemu'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <domain type='kvm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:    </arch>
Nov 23 04:42:58 localhost nova_compute[281613]:    <features>
Nov 23 04:42:58 localhost nova_compute[281613]:      <pae/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <nonpae/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <acpi default='on' toggle='yes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <apic default='on' toggle='no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <cpuselection/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <deviceboot/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <disksnapshot default='on' toggle='no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <externalSnapshot/>
Nov 23 04:42:58 localhost nova_compute[281613]:    </features>
Nov 23 04:42:58 localhost nova_compute[281613]:  </guest>
Nov 23 04:42:58 localhost nova_compute[281613]: 
Nov 23 04:42:58 localhost nova_compute[281613]:  <guest>
Nov 23 04:42:58 localhost nova_compute[281613]:    <os_type>hvm</os_type>
Nov 23 04:42:58 localhost nova_compute[281613]:    <arch name='x86_64'>
Nov 23 04:42:58 localhost nova_compute[281613]:      <wordsize>64</wordsize>
Nov 23 04:42:58 localhost nova_compute[281613]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:      <domain type='qemu'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <domain type='kvm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:    </arch>
Nov 23 04:42:58 localhost nova_compute[281613]:    <features>
Nov 23 04:42:58 localhost nova_compute[281613]:      <acpi default='on' toggle='yes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <apic default='on' toggle='no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <cpuselection/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <deviceboot/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <disksnapshot default='on' toggle='no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <externalSnapshot/>
Nov 23 04:42:58 localhost nova_compute[281613]:    </features>
Nov 23 04:42:58 localhost nova_compute[281613]:  </guest>
Nov 23 04:42:58 localhost nova_compute[281613]: 
Nov 23 04:42:58 localhost nova_compute[281613]: </capabilities>
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.935 281617 DEBUG nova.virt.libvirt.volume.mount [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.937 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 23 04:42:58 localhost nova_compute[281613]: 2025-11-23 09:42:58.942 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 23 04:42:58 localhost nova_compute[281613]: <domainCapabilities>
Nov 23 04:42:58 localhost nova_compute[281613]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:42:58 localhost nova_compute[281613]:  <domain>kvm</domain>
Nov 23 04:42:58 localhost nova_compute[281613]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 04:42:58 localhost nova_compute[281613]:  <arch>i686</arch>
Nov 23 04:42:58 localhost nova_compute[281613]:  <vcpu max='1024'/>
Nov 23 04:42:58 localhost nova_compute[281613]:  <iothreads supported='yes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:  <os supported='yes'>
Nov 23 04:42:58 localhost nova_compute[281613]:    <enum name='firmware'/>
Nov 23 04:42:58 localhost nova_compute[281613]:    <loader supported='yes'>
Nov 23 04:42:58 localhost nova_compute[281613]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:42:58 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>rom</value>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>pflash</value>
Nov 23 04:42:58 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:58 localhost nova_compute[281613]:      <enum name='readonly'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>yes</value>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>no</value>
Nov 23 04:42:58 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:58 localhost nova_compute[281613]:      <enum name='secure'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>no</value>
Nov 23 04:42:58 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:58 localhost nova_compute[281613]:    </loader>
Nov 23 04:42:58 localhost nova_compute[281613]:  </os>
Nov 23 04:42:58 localhost nova_compute[281613]:  <cpu>
Nov 23 04:42:58 localhost nova_compute[281613]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:42:58 localhost nova_compute[281613]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>on</value>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>off</value>
Nov 23 04:42:58 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:58 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:58 localhost nova_compute[281613]:    <mode name='maximum' supported='yes'>
Nov 23 04:42:58 localhost nova_compute[281613]:      <enum name='maximumMigratable'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>on</value>
Nov 23 04:42:58 localhost nova_compute[281613]:        <value>off</value>
Nov 23 04:42:58 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:58 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:58 localhost nova_compute[281613]:    <mode name='host-model' supported='yes'>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <vendor>AMD</vendor>
Nov 23 04:42:58 localhost nova_compute[281613]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='x2apic'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='stibp'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='ssbd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='succor'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='ibrs'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='lbrv'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:58 localhost nova_compute[281613]:    <mode name='custom' supported='yes'>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Broadwell'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Broadwell-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Broadwell-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Broadwell-v3'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Broadwell-v4'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cooperlake'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cooperlake-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Cooperlake-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Denverton'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Denverton-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Denverton-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Denverton-v3'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Dhyana-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Genoa'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='auto-ibrs'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='auto-ibrs'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Milan'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Rome'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-v3'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='EPYC-v4'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='GraniteRapids'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx10'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx10-128'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx10-256'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx10-512'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Haswell'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Haswell-IBRS'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Haswell-noTSX'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Haswell-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Haswell-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Haswell-v3'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Haswell-v4'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='IvyBridge'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='IvyBridge-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='IvyBridge-v2'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='KnightsMill'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-4fmaps'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-4vnniw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512er'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512pf'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='KnightsMill-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-4fmaps'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-4vnniw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512er'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512pf'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Opteron_G4'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Opteron_G5'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='tbm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='tbm'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:58 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:42:58 localhost nova_compute[281613]:      <blockers model='SapphireRapids'>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:58 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SierraForest'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ne-convert'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cmpccxadd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SierraForest-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ne-convert'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cmpccxadd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='athlon'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='athlon-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='core2duo'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='core2duo-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='coreduo'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='coreduo-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='n270'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='n270-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='phenom'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='phenom-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:  </cpu>
Nov 23 04:42:59 localhost nova_compute[281613]:  <memoryBacking supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:    <enum name='sourceType'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>file</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>anonymous</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>memfd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:    </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:  </memoryBacking>
Nov 23 04:42:59 localhost nova_compute[281613]:  <devices>
Nov 23 04:42:59 localhost nova_compute[281613]:    <disk supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='diskDevice'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>disk</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>cdrom</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>floppy</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>lun</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='bus'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>fdc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>scsi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>sata</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-non-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </disk>
Nov 23 04:42:59 localhost nova_compute[281613]:    <graphics supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vnc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>egl-headless</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dbus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </graphics>
Nov 23 04:42:59 localhost nova_compute[281613]:    <video supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='modelType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vga</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>cirrus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>none</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>bochs</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ramfb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </video>
Nov 23 04:42:59 localhost nova_compute[281613]:    <hostdev supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='mode'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>subsystem</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='startupPolicy'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>default</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>mandatory</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>requisite</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>optional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='subsysType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pci</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>scsi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='capsType'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='pciBackend'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    </hostdev>
Nov 23 04:42:59 localhost nova_compute[281613]:    <rng supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-non-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>random</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>egd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>builtin</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </rng>
Nov 23 04:42:59 localhost nova_compute[281613]:    <filesystem supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='driverType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>path</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>handle</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtiofs</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </filesystem>
Nov 23 04:42:59 localhost nova_compute[281613]:    <tpm supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tpm-tis</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tpm-crb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>emulator</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>external</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendVersion'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>2.0</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </tpm>
Nov 23 04:42:59 localhost nova_compute[281613]:    <redirdev supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='bus'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </redirdev>
Nov 23 04:42:59 localhost nova_compute[281613]:    <channel supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pty</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>unix</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </channel>
Nov 23 04:42:59 localhost nova_compute[281613]:    <crypto supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>qemu</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>builtin</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </crypto>
Nov 23 04:42:59 localhost nova_compute[281613]:    <interface supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>default</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>passt</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </interface>
Nov 23 04:42:59 localhost nova_compute[281613]:    <panic supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>isa</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>hyperv</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </panic>
Nov 23 04:42:59 localhost nova_compute[281613]:    <console supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>null</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pty</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dev</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>file</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pipe</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>stdio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>udp</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tcp</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>unix</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>qemu-vdagent</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dbus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </console>
Nov 23 04:42:59 localhost nova_compute[281613]:  </devices>
Nov 23 04:42:59 localhost nova_compute[281613]:  <features>
Nov 23 04:42:59 localhost nova_compute[281613]:    <gic supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <vmcoreinfo supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <genid supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <backingStoreInput supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <backup supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <async-teardown supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <ps2 supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <sev supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <sgx supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <hyperv supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='features'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>relaxed</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vapic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>spinlocks</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vpindex</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>runtime</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>synic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>stimer</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>reset</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vendor_id</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>frequencies</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>reenlightenment</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tlbflush</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ipi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>avic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>emsr_bitmap</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>xmm_input</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <defaults>
Nov 23 04:42:59 localhost nova_compute[281613]:        <spinlocks>4095</spinlocks>
Nov 23 04:42:59 localhost nova_compute[281613]:        <stimer_direct>on</stimer_direct>
Nov 23 04:42:59 localhost nova_compute[281613]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:42:59 localhost nova_compute[281613]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:42:59 localhost nova_compute[281613]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:42:59 localhost nova_compute[281613]:      </defaults>
Nov 23 04:42:59 localhost nova_compute[281613]:    </hyperv>
Nov 23 04:42:59 localhost nova_compute[281613]:    <launchSecurity supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='sectype'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tdx</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </launchSecurity>
Nov 23 04:42:59 localhost nova_compute[281613]:  </features>
Nov 23 04:42:59 localhost nova_compute[281613]: </domainCapabilities>
Nov 23 04:42:59 localhost nova_compute[281613]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:58.954 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 23 04:42:59 localhost nova_compute[281613]: <domainCapabilities>
Nov 23 04:42:59 localhost nova_compute[281613]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:42:59 localhost nova_compute[281613]:  <domain>kvm</domain>
Nov 23 04:42:59 localhost nova_compute[281613]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:42:59 localhost nova_compute[281613]:  <arch>i686</arch>
Nov 23 04:42:59 localhost nova_compute[281613]:  <vcpu max='240'/>
Nov 23 04:42:59 localhost nova_compute[281613]:  <iothreads supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:  <os supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:    <enum name='firmware'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <loader supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>rom</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pflash</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='readonly'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>yes</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>no</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='secure'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>no</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </loader>
Nov 23 04:42:59 localhost nova_compute[281613]:  </os>
Nov 23 04:42:59 localhost nova_compute[281613]:  <cpu>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>on</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>off</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='maximum' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='maximumMigratable'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>on</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>off</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='host-model' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <vendor>AMD</vendor>
Nov 23 04:42:59 localhost nova_compute[281613]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='x2apic'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='stibp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='succor'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='lbrv'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='custom' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Dhyana-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Genoa'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='auto-ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='auto-ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-128'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-256'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-512'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='KnightsMill'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4fmaps'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4vnniw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512er'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512pf'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='KnightsMill-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4fmaps'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4vnniw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512er'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512pf'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tbm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tbm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SierraForest'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ne-convert'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cmpccxadd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SierraForest-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ne-convert'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cmpccxadd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='athlon'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='athlon-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='core2duo'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='core2duo-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='coreduo'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='coreduo-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='n270'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='n270-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='phenom'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='phenom-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:  </cpu>
Nov 23 04:42:59 localhost nova_compute[281613]:  <memoryBacking supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:    <enum name='sourceType'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>file</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>anonymous</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>memfd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:    </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:  </memoryBacking>
Nov 23 04:42:59 localhost nova_compute[281613]:  <devices>
Nov 23 04:42:59 localhost nova_compute[281613]:    <disk supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='diskDevice'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>disk</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>cdrom</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>floppy</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>lun</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='bus'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ide</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>fdc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>scsi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>sata</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-non-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </disk>
Nov 23 04:42:59 localhost nova_compute[281613]:    <graphics supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vnc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>egl-headless</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dbus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </graphics>
Nov 23 04:42:59 localhost nova_compute[281613]:    <video supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='modelType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vga</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>cirrus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>none</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>bochs</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ramfb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </video>
Nov 23 04:42:59 localhost nova_compute[281613]:    <hostdev supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='mode'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>subsystem</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='startupPolicy'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>default</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>mandatory</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>requisite</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>optional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='subsysType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pci</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>scsi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='capsType'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='pciBackend'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    </hostdev>
Nov 23 04:42:59 localhost nova_compute[281613]:    <rng supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-non-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>random</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>egd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>builtin</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </rng>
Nov 23 04:42:59 localhost nova_compute[281613]:    <filesystem supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='driverType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>path</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>handle</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtiofs</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </filesystem>
Nov 23 04:42:59 localhost nova_compute[281613]:    <tpm supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tpm-tis</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tpm-crb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>emulator</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>external</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendVersion'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>2.0</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </tpm>
Nov 23 04:42:59 localhost nova_compute[281613]:    <redirdev supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='bus'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </redirdev>
Nov 23 04:42:59 localhost nova_compute[281613]:    <channel supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pty</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>unix</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </channel>
Nov 23 04:42:59 localhost nova_compute[281613]:    <crypto supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>qemu</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>builtin</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </crypto>
Nov 23 04:42:59 localhost nova_compute[281613]:    <interface supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>default</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>passt</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </interface>
Nov 23 04:42:59 localhost nova_compute[281613]:    <panic supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>isa</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>hyperv</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </panic>
Nov 23 04:42:59 localhost nova_compute[281613]:    <console supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>null</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pty</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dev</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>file</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pipe</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>stdio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>udp</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tcp</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>unix</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>qemu-vdagent</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dbus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </console>
Nov 23 04:42:59 localhost nova_compute[281613]:  </devices>
Nov 23 04:42:59 localhost nova_compute[281613]:  <features>
Nov 23 04:42:59 localhost nova_compute[281613]:    <gic supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <vmcoreinfo supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <genid supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <backingStoreInput supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <backup supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <async-teardown supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <ps2 supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <sev supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <sgx supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <hyperv supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='features'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>relaxed</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vapic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>spinlocks</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vpindex</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>runtime</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>synic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>stimer</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>reset</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vendor_id</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>frequencies</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>reenlightenment</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tlbflush</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ipi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>avic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>emsr_bitmap</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>xmm_input</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <defaults>
Nov 23 04:42:59 localhost nova_compute[281613]:        <spinlocks>4095</spinlocks>
Nov 23 04:42:59 localhost nova_compute[281613]:        <stimer_direct>on</stimer_direct>
Nov 23 04:42:59 localhost nova_compute[281613]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:42:59 localhost nova_compute[281613]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:42:59 localhost nova_compute[281613]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:42:59 localhost nova_compute[281613]:      </defaults>
Nov 23 04:42:59 localhost nova_compute[281613]:    </hyperv>
Nov 23 04:42:59 localhost nova_compute[281613]:    <launchSecurity supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='sectype'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tdx</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </launchSecurity>
Nov 23 04:42:59 localhost nova_compute[281613]:  </features>
Nov 23 04:42:59 localhost nova_compute[281613]: </domainCapabilities>
Nov 23 04:42:59 localhost nova_compute[281613]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:58.996 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.002 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 23 04:42:59 localhost nova_compute[281613]: <domainCapabilities>
Nov 23 04:42:59 localhost nova_compute[281613]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:42:59 localhost nova_compute[281613]:  <domain>kvm</domain>
Nov 23 04:42:59 localhost nova_compute[281613]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 23 04:42:59 localhost nova_compute[281613]:  <arch>x86_64</arch>
Nov 23 04:42:59 localhost nova_compute[281613]:  <vcpu max='1024'/>
Nov 23 04:42:59 localhost nova_compute[281613]:  <iothreads supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:  <os supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:    <enum name='firmware'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>efi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:    </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    <loader supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>rom</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pflash</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='readonly'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>yes</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>no</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='secure'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>yes</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>no</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </loader>
Nov 23 04:42:59 localhost nova_compute[281613]:  </os>
Nov 23 04:42:59 localhost nova_compute[281613]:  <cpu>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>on</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>off</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='maximum' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='maximumMigratable'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>on</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>off</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='host-model' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <vendor>AMD</vendor>
Nov 23 04:42:59 localhost nova_compute[281613]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='x2apic'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='stibp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='succor'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='lbrv'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='custom' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Dhyana-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Genoa'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='auto-ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='auto-ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-128'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-256'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-512'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='KnightsMill'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4fmaps'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4vnniw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512er'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512pf'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='KnightsMill-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4fmaps'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4vnniw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512er'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512pf'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tbm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tbm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SierraForest'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ne-convert'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cmpccxadd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SierraForest-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ne-convert'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cmpccxadd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='athlon'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='athlon-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='core2duo'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='core2duo-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='coreduo'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='coreduo-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='n270'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='n270-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='phenom'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='phenom-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:  </cpu>
Nov 23 04:42:59 localhost nova_compute[281613]:  <memoryBacking supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:    <enum name='sourceType'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>file</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>anonymous</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>memfd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:    </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:  </memoryBacking>
Nov 23 04:42:59 localhost nova_compute[281613]:  <devices>
Nov 23 04:42:59 localhost nova_compute[281613]:    <disk supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='diskDevice'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>disk</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>cdrom</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>floppy</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>lun</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='bus'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>fdc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>scsi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>sata</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-non-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </disk>
Nov 23 04:42:59 localhost nova_compute[281613]:    <graphics supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vnc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>egl-headless</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dbus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </graphics>
Nov 23 04:42:59 localhost nova_compute[281613]:    <video supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='modelType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vga</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>cirrus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>none</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>bochs</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ramfb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </video>
Nov 23 04:42:59 localhost nova_compute[281613]:    <hostdev supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='mode'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>subsystem</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='startupPolicy'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>default</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>mandatory</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>requisite</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>optional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='subsysType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pci</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>scsi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='capsType'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='pciBackend'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    </hostdev>
Nov 23 04:42:59 localhost nova_compute[281613]:    <rng supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-non-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>random</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>egd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>builtin</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </rng>
Nov 23 04:42:59 localhost nova_compute[281613]:    <filesystem supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='driverType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>path</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>handle</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtiofs</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </filesystem>
Nov 23 04:42:59 localhost nova_compute[281613]:    <tpm supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tpm-tis</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tpm-crb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>emulator</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>external</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendVersion'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>2.0</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </tpm>
Nov 23 04:42:59 localhost nova_compute[281613]:    <redirdev supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='bus'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </redirdev>
Nov 23 04:42:59 localhost nova_compute[281613]:    <channel supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pty</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>unix</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </channel>
Nov 23 04:42:59 localhost nova_compute[281613]:    <crypto supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>qemu</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>builtin</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </crypto>
Nov 23 04:42:59 localhost nova_compute[281613]:    <interface supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>default</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>passt</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </interface>
Nov 23 04:42:59 localhost nova_compute[281613]:    <panic supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>isa</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>hyperv</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </panic>
Nov 23 04:42:59 localhost nova_compute[281613]:    <console supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>null</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pty</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dev</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>file</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pipe</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>stdio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>udp</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tcp</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>unix</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>qemu-vdagent</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dbus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </console>
Nov 23 04:42:59 localhost nova_compute[281613]:  </devices>
Nov 23 04:42:59 localhost nova_compute[281613]:  <features>
Nov 23 04:42:59 localhost nova_compute[281613]:    <gic supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <vmcoreinfo supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <genid supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <backingStoreInput supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <backup supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <async-teardown supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <ps2 supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <sev supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <sgx supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <hyperv supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='features'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>relaxed</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vapic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>spinlocks</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vpindex</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>runtime</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>synic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>stimer</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>reset</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vendor_id</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>frequencies</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>reenlightenment</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tlbflush</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ipi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>avic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>emsr_bitmap</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>xmm_input</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <defaults>
Nov 23 04:42:59 localhost nova_compute[281613]:        <spinlocks>4095</spinlocks>
Nov 23 04:42:59 localhost nova_compute[281613]:        <stimer_direct>on</stimer_direct>
Nov 23 04:42:59 localhost nova_compute[281613]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:42:59 localhost nova_compute[281613]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:42:59 localhost nova_compute[281613]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:42:59 localhost nova_compute[281613]:      </defaults>
Nov 23 04:42:59 localhost nova_compute[281613]:    </hyperv>
Nov 23 04:42:59 localhost nova_compute[281613]:    <launchSecurity supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='sectype'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tdx</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </launchSecurity>
Nov 23 04:42:59 localhost nova_compute[281613]:  </features>
Nov 23 04:42:59 localhost nova_compute[281613]: </domainCapabilities>
Nov 23 04:42:59 localhost nova_compute[281613]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.062 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 23 04:42:59 localhost nova_compute[281613]: <domainCapabilities>
Nov 23 04:42:59 localhost nova_compute[281613]:  <path>/usr/libexec/qemu-kvm</path>
Nov 23 04:42:59 localhost nova_compute[281613]:  <domain>kvm</domain>
Nov 23 04:42:59 localhost nova_compute[281613]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 23 04:42:59 localhost nova_compute[281613]:  <arch>x86_64</arch>
Nov 23 04:42:59 localhost nova_compute[281613]:  <vcpu max='240'/>
Nov 23 04:42:59 localhost nova_compute[281613]:  <iothreads supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:  <os supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:    <enum name='firmware'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <loader supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>rom</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pflash</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='readonly'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>yes</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>no</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='secure'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>no</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </loader>
Nov 23 04:42:59 localhost nova_compute[281613]:  </os>
Nov 23 04:42:59 localhost nova_compute[281613]:  <cpu>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='host-passthrough' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='hostPassthroughMigratable'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>on</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>off</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='maximum' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='maximumMigratable'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>on</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>off</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='host-model' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <vendor>AMD</vendor>
Nov 23 04:42:59 localhost nova_compute[281613]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='x2apic'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc-deadline'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='hypervisor'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc_adjust'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='spec-ctrl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='stibp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='cmp_legacy'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='overflow-recov'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='succor'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='amd-ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='virt-ssbd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='lbrv'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='tsc-scale'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='vmcb-clean'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='pause-filter'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='pfthreshold'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='svme-addr-chk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <feature policy='disable' name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:    <mode name='custom' supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Broadwell-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cascadelake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Cooperlake-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Denverton-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Dhyana-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Genoa'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='auto-ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Genoa-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='auto-ibrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Milan-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amd-psfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='no-nested-data-bp'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='null-sel-clr-base'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='stibp-always-on'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-Rome-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='EPYC-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='GraniteRapids-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-128'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-256'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx10-512'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='prefetchiti'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Haswell-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-noTSX'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v6'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Icelake-Server-v7'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='IvyBridge-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='KnightsMill'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4fmaps'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4vnniw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512er'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512pf'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='KnightsMill-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4fmaps'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-4vnniw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512er'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512pf'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G4-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tbm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Opteron_G5-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fma4'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tbm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xop'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SapphireRapids-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='amx-tile'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-bf16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-fp16'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512-vpopcntdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bitalg'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vbmi2'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrc'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fzrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='la57'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='taa-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='tsx-ldtrk'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xfd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SierraForest'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ne-convert'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cmpccxadd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='SierraForest-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ifma'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-ne-convert'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx-vnni-int8'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='bus-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cmpccxadd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fbsdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='fsrs'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ibrs-all'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mcdt-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pbrsb-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='psdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='sbdr-ssdp-no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='serialize'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vaes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='vpclmulqdq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Client-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='hle'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='rtm'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Skylake-Server-v5'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512bw'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512cd'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512dq'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512f'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='avx512vl'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='invpcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pcid'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='pku'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='mpx'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v2'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v3'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='core-capability'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='split-lock-detect'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='Snowridge-v4'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='cldemote'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='erms'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='gfni'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdir64b'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='movdiri'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='xsaves'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='athlon'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='athlon-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='core2duo'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='core2duo-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='coreduo'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='coreduo-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='n270'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='n270-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='ss'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='phenom'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <blockers model='phenom-v1'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnow'/>
Nov 23 04:42:59 localhost nova_compute[281613]:        <feature name='3dnowext'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      </blockers>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 23 04:42:59 localhost nova_compute[281613]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 23 04:42:59 localhost nova_compute[281613]:    </mode>
Nov 23 04:42:59 localhost nova_compute[281613]:  </cpu>
Nov 23 04:42:59 localhost nova_compute[281613]:  <memoryBacking supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:    <enum name='sourceType'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>file</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>anonymous</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      <value>memfd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:    </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:  </memoryBacking>
Nov 23 04:42:59 localhost nova_compute[281613]:  <devices>
Nov 23 04:42:59 localhost nova_compute[281613]:    <disk supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='diskDevice'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>disk</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>cdrom</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>floppy</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>lun</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='bus'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ide</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>fdc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>scsi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>sata</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-non-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </disk>
Nov 23 04:42:59 localhost nova_compute[281613]:    <graphics supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vnc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>egl-headless</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dbus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </graphics>
Nov 23 04:42:59 localhost nova_compute[281613]:    <video supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='modelType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vga</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>cirrus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>none</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>bochs</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ramfb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </video>
Nov 23 04:42:59 localhost nova_compute[281613]:    <hostdev supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='mode'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>subsystem</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='startupPolicy'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>default</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>mandatory</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>requisite</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>optional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='subsysType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pci</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>scsi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='capsType'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='pciBackend'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    </hostdev>
Nov 23 04:42:59 localhost nova_compute[281613]:    <rng supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtio-non-transitional</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>random</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>egd</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>builtin</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </rng>
Nov 23 04:42:59 localhost nova_compute[281613]:    <filesystem supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='driverType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>path</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>handle</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>virtiofs</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </filesystem>
Nov 23 04:42:59 localhost nova_compute[281613]:    <tpm supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tpm-tis</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tpm-crb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>emulator</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>external</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendVersion'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>2.0</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </tpm>
Nov 23 04:42:59 localhost nova_compute[281613]:    <redirdev supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='bus'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>usb</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </redirdev>
Nov 23 04:42:59 localhost nova_compute[281613]:    <channel supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pty</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>unix</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </channel>
Nov 23 04:42:59 localhost nova_compute[281613]:    <crypto supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'/>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>qemu</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendModel'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>builtin</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </crypto>
Nov 23 04:42:59 localhost nova_compute[281613]:    <interface supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='backendType'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>default</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>passt</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </interface>
Nov 23 04:42:59 localhost nova_compute[281613]:    <panic supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='model'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>isa</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>hyperv</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </panic>
Nov 23 04:42:59 localhost nova_compute[281613]:    <console supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='type'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>null</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vc</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pty</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dev</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>file</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>pipe</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>stdio</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>udp</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tcp</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>unix</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>qemu-vdagent</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>dbus</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </console>
Nov 23 04:42:59 localhost nova_compute[281613]:  </devices>
Nov 23 04:42:59 localhost nova_compute[281613]:  <features>
Nov 23 04:42:59 localhost nova_compute[281613]:    <gic supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <vmcoreinfo supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <genid supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <backingStoreInput supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <backup supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <async-teardown supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <ps2 supported='yes'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <sev supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <sgx supported='no'/>
Nov 23 04:42:59 localhost nova_compute[281613]:    <hyperv supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='features'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>relaxed</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vapic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>spinlocks</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vpindex</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>runtime</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>synic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>stimer</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>reset</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>vendor_id</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>frequencies</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>reenlightenment</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tlbflush</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>ipi</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>avic</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>emsr_bitmap</value>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>xmm_input</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:      <defaults>
Nov 23 04:42:59 localhost nova_compute[281613]:        <spinlocks>4095</spinlocks>
Nov 23 04:42:59 localhost nova_compute[281613]:        <stimer_direct>on</stimer_direct>
Nov 23 04:42:59 localhost nova_compute[281613]:        <tlbflush_direct>off</tlbflush_direct>
Nov 23 04:42:59 localhost nova_compute[281613]:        <tlbflush_extended>off</tlbflush_extended>
Nov 23 04:42:59 localhost nova_compute[281613]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 23 04:42:59 localhost nova_compute[281613]:      </defaults>
Nov 23 04:42:59 localhost nova_compute[281613]:    </hyperv>
Nov 23 04:42:59 localhost nova_compute[281613]:    <launchSecurity supported='yes'>
Nov 23 04:42:59 localhost nova_compute[281613]:      <enum name='sectype'>
Nov 23 04:42:59 localhost nova_compute[281613]:        <value>tdx</value>
Nov 23 04:42:59 localhost nova_compute[281613]:      </enum>
Nov 23 04:42:59 localhost nova_compute[281613]:    </launchSecurity>
Nov 23 04:42:59 localhost nova_compute[281613]:  </features>
Nov 23 04:42:59 localhost nova_compute[281613]: </domainCapabilities>
Nov 23 04:42:59 localhost nova_compute[281613]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.118 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.118 281617 INFO nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Secure Boot support detected#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.120 281617 INFO nova.virt.libvirt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.120 281617 INFO nova.virt.libvirt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.133 281617 DEBUG nova.virt.libvirt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.150 281617 INFO nova.virt.node [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Determined node identity 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from /var/lib/nova/compute_id#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.168 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Verified node 1df367d3-e79d-4d54-9b3c-f6af3beffa8b matches my host np0005532586.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.191 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.284 281617 DEBUG oslo_concurrency.lockutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.284 281617 DEBUG oslo_concurrency.lockutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.284 281617 DEBUG oslo_concurrency.lockutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.285 281617 DEBUG nova.compute.resource_tracker [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.285 281617 DEBUG oslo_concurrency.processutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:42:59 localhost systemd[1]: var-lib-containers-storage-overlay-83eafbe106b69bfb40e0c208e0c47fb0b57b27275c9dc4baec8e681d4d8d6fc7-merged.mount: Deactivated successfully.
Nov 23 04:42:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-227bac262497afebb7b0ea70cd4bf91fda7e3fad1a0376f9e59df1eb8a124bec-userdata-shm.mount: Deactivated successfully.
Nov 23 04:42:59 localhost systemd[1]: session-59.scope: Deactivated successfully.
Nov 23 04:42:59 localhost systemd[1]: session-59.scope: Consumed 1min 30.161s CPU time.
Nov 23 04:42:59 localhost systemd-logind[761]: Session 59 logged out. Waiting for processes to exit.
Nov 23 04:42:59 localhost systemd-logind[761]: Removed session 59.
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.683 281617 DEBUG oslo_concurrency.processutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.858 281617 WARNING nova.virt.libvirt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.860 281617 DEBUG nova.compute.resource_tracker [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12504MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.860 281617 DEBUG oslo_concurrency.lockutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:42:59 localhost nova_compute[281613]: 2025-11-23 09:42:59.861 281617 DEBUG oslo_concurrency.lockutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.002 281617 DEBUG nova.compute.resource_tracker [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.003 281617 DEBUG nova.compute.resource_tracker [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.023 281617 DEBUG nova.scheduler.client.report [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.143 281617 DEBUG nova.scheduler.client.report [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.143 281617 DEBUG nova.compute.provider_tree [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.176 281617 DEBUG nova.scheduler.client.report [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.207 281617 DEBUG nova.scheduler.client.report [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.232 281617 DEBUG oslo_concurrency.processutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:43:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21465 DF PROTO=TCP SPT=56942 DPT=9102 SEQ=228427388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759CBA590000000001030307) 
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.691 281617 DEBUG oslo_concurrency.processutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.696 281617 DEBUG nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 23 04:43:00 localhost nova_compute[281613]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.697 281617 INFO nova.virt.libvirt.host [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.698 281617 DEBUG nova.compute.provider_tree [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.699 281617 DEBUG nova.virt.libvirt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.722 281617 DEBUG nova.scheduler.client.report [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.746 281617 DEBUG nova.compute.resource_tracker [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.747 281617 DEBUG oslo_concurrency.lockutils [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.747 281617 DEBUG nova.service [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.771 281617 DEBUG nova.service [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 23 04:43:00 localhost nova_compute[281613]: 2025-11-23 09:43:00.772 281617 DEBUG nova.servicegroup.drivers.db [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] DB_Driver: join new ServiceGroup member np0005532586.localdomain to the compute group, service = <Service: host=np0005532586.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 23 04:43:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:43:08 localhost podman[281906]: 2025-11-23 09:43:08.21961588 +0000 UTC m=+0.123530067 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:43:08 localhost podman[281906]: 2025-11-23 09:43:08.232068701 +0000 UTC m=+0.135982948 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:43:08 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:43:08 localhost sshd[281925]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:43:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21466 DF PROTO=TCP SPT=56942 DPT=9102 SEQ=228427388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759CDAD90000000001030307) 
Nov 23 04:43:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:43:09.247 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:43:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:43:09.248 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:43:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:43:09.248 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:43:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:43:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:43:09 localhost podman[281928]: 2025-11-23 09:43:09.886516128 +0000 UTC m=+0.074339159 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:43:09 localhost podman[281928]: 2025-11-23 09:43:09.89673593 +0000 UTC m=+0.084558911 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:43:09 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:43:09 localhost podman[281927]: 2025-11-23 09:43:09.945126347 +0000 UTC m=+0.135471184 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350)
Nov 23 04:43:09 localhost podman[281927]: 2025-11-23 09:43:09.963084815 +0000 UTC m=+0.153429702 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red 
Hat Universal Base Image 9 Minimal, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Nov 23 04:43:09 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:43:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:43:11 localhost podman[240144]: time="2025-11-23T09:43:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:43:11 localhost podman[240144]: @ - - [23/Nov/2025:09:43:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:43:11 localhost podman[240144]: @ - - [23/Nov/2025:09:43:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17702 "" "Go-http-client/1.1"
Nov 23 04:43:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:43:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:43:18 localhost podman[281970]: 2025-11-23 09:43:18.144586294 +0000 UTC m=+0.050283788 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:43:18 localhost podman[281970]: 2025-11-23 09:43:18.148094847 +0000 UTC m=+0.053792331 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:43:18 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:43:18 localhost systemd[1]: tmp-crun.xvl93k.mount: Deactivated successfully.
Nov 23 04:43:18 localhost podman[281971]: 2025-11-23 09:43:18.206722987 +0000 UTC m=+0.109673558 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:43:18 localhost podman[281971]: 2025-11-23 09:43:18.291975765 +0000 UTC m=+0.194926336 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:43:18 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:43:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:43:21 localhost podman[282012]: 2025-11-23 09:43:21.168758052 +0000 UTC m=+0.074832811 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller)
Nov 23 04:43:21 localhost podman[282012]: 2025-11-23 09:43:21.235084427 +0000 UTC m=+0.141159156 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 23 04:43:21 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:43:22 localhost openstack_network_exporter[242118]: ERROR   09:43:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:43:22 localhost openstack_network_exporter[242118]: ERROR   09:43:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:43:22 localhost openstack_network_exporter[242118]: ERROR   09:43:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:43:22 localhost openstack_network_exporter[242118]: ERROR   09:43:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:43:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:43:22 localhost openstack_network_exporter[242118]: ERROR   09:43:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:43:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:43:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54965 DF PROTO=TCP SPT=43174 DPT=9102 SEQ=4002211221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D13B00000000001030307) 
Nov 23 04:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:43:24 localhost podman[282037]: 2025-11-23 09:43:24.18234407 +0000 UTC m=+0.087923869 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 23 04:43:24 localhost podman[282037]: 2025-11-23 09:43:24.197965446 +0000 UTC m=+0.103545285 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 04:43:24 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:43:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54966 DF PROTO=TCP SPT=43174 DPT=9102 SEQ=4002211221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D17990000000001030307) 
Nov 23 04:43:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21467 DF PROTO=TCP SPT=56942 DPT=9102 SEQ=228427388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D1AD90000000001030307) 
Nov 23 04:43:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54967 DF PROTO=TCP SPT=43174 DPT=9102 SEQ=4002211221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D1F990000000001030307) 
Nov 23 04:43:27 localhost podman[282204]: 
Nov 23 04:43:27 localhost podman[282204]: 2025-11-23 09:43:27.064182622 +0000 UTC m=+0.090766936 container create 4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_germain, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, RELEASE=main, distribution-scope=public, version=7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:43:27 localhost systemd[1]: Started libpod-conmon-4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe.scope.
Nov 23 04:43:27 localhost podman[282204]: 2025-11-23 09:43:27.020060278 +0000 UTC m=+0.046644632 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:43:27 localhost systemd[1]: Started libcrun container.
Nov 23 04:43:27 localhost podman[282204]: 2025-11-23 09:43:27.136571018 +0000 UTC m=+0.163155332 container init 4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_germain, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=)
Nov 23 04:43:27 localhost podman[282204]: 2025-11-23 09:43:27.143982365 +0000 UTC m=+0.170566669 container start 4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_germain, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, version=7, vcs-type=git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, description=Red Hat Ceph Storage 7)
Nov 23 04:43:27 localhost podman[282204]: 2025-11-23 09:43:27.144238211 +0000 UTC m=+0.170822565 container attach 4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_germain, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True)
Nov 23 04:43:27 localhost gracious_germain[282219]: 167 167
Nov 23 04:43:27 localhost systemd[1]: libpod-4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe.scope: Deactivated successfully.
Nov 23 04:43:27 localhost podman[282204]: 2025-11-23 09:43:27.147389506 +0000 UTC m=+0.173973850 container died 4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_germain, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, architecture=x86_64, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, ceph=True, vendor=Red Hat, Inc., release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:43:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12143 DF PROTO=TCP SPT=37448 DPT=9102 SEQ=2547291997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D22D90000000001030307) 
Nov 23 04:43:27 localhost podman[282224]: 2025-11-23 09:43:27.226341166 +0000 UTC m=+0.071736640 container remove 4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_germain, distribution-scope=public, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph)
Nov 23 04:43:27 localhost systemd[1]: libpod-conmon-4eeffe42908d6bb45779f8639711bfa8885b90fd6f3623ce8d073498fd51cbbe.scope: Deactivated successfully.
Nov 23 04:43:27 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 04:43:27 localhost podman[282245]: 
Nov 23 04:43:27 localhost podman[282245]: 2025-11-23 09:43:27.454014863 +0000 UTC m=+0.076827204 container create 9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hermann, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:43:27 localhost systemd[1]: Started libpod-conmon-9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48.scope.
Nov 23 04:43:27 localhost systemd[1]: Started libcrun container.
Nov 23 04:43:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00870bcb995e3a210206fa8ca17566f0deb71627e833440b7fd37694db0b032/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 04:43:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00870bcb995e3a210206fa8ca17566f0deb71627e833440b7fd37694db0b032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 04:43:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00870bcb995e3a210206fa8ca17566f0deb71627e833440b7fd37694db0b032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 04:43:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00870bcb995e3a210206fa8ca17566f0deb71627e833440b7fd37694db0b032/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 04:43:27 localhost podman[282245]: 2025-11-23 09:43:27.511050911 +0000 UTC m=+0.133863252 container init 9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hermann, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, release=553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:43:27 localhost podman[282245]: 2025-11-23 09:43:27.520985865 +0000 UTC m=+0.143798226 container start 9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hermann, release=553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=)
Nov 23 04:43:27 localhost podman[282245]: 2025-11-23 09:43:27.521237752 +0000 UTC m=+0.144050113 container attach 9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hermann, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main)
Nov 23 04:43:27 localhost podman[282245]: 2025-11-23 09:43:27.421996042 +0000 UTC m=+0.044808453 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-8cadbeb86e9f406fd57e37dec73806ea30fe6fcdd71ddaf339d830efd9d0458f-merged.mount: Deactivated successfully.
Nov 23 04:43:28 localhost lucid_hermann[282260]: [
Nov 23 04:43:28 localhost lucid_hermann[282260]:    {
Nov 23 04:43:28 localhost lucid_hermann[282260]:        "available": false,
Nov 23 04:43:28 localhost lucid_hermann[282260]:        "ceph_device": false,
Nov 23 04:43:28 localhost lucid_hermann[282260]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 04:43:28 localhost lucid_hermann[282260]:        "lsm_data": {},
Nov 23 04:43:28 localhost lucid_hermann[282260]:        "lvs": [],
Nov 23 04:43:28 localhost lucid_hermann[282260]:        "path": "/dev/sr0",
Nov 23 04:43:28 localhost lucid_hermann[282260]:        "rejected_reasons": [
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "Has a FileSystem",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "Insufficient space (<5GB)"
Nov 23 04:43:28 localhost lucid_hermann[282260]:        ],
Nov 23 04:43:28 localhost lucid_hermann[282260]:        "sys_api": {
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "actuators": null,
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "device_nodes": "sr0",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "human_readable_size": "482.00 KB",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "id_bus": "ata",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "model": "QEMU DVD-ROM",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "nr_requests": "2",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "partitions": {},
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "path": "/dev/sr0",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "removable": "1",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "rev": "2.5+",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "ro": "0",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "rotational": "1",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "sas_address": "",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "sas_device_handle": "",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "scheduler_mode": "mq-deadline",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "sectors": 0,
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "sectorsize": "2048",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "size": 493568.0,
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "support_discard": "0",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "type": "disk",
Nov 23 04:43:28 localhost lucid_hermann[282260]:            "vendor": "QEMU"
Nov 23 04:43:28 localhost lucid_hermann[282260]:        }
Nov 23 04:43:28 localhost lucid_hermann[282260]:    }
Nov 23 04:43:28 localhost lucid_hermann[282260]: ]
Nov 23 04:43:28 localhost systemd[1]: libpod-9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48.scope: Deactivated successfully.
Nov 23 04:43:28 localhost podman[282245]: 2025-11-23 09:43:28.545690948 +0000 UTC m=+1.168503309 container died 9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hermann, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, name=rhceph, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True)
Nov 23 04:43:28 localhost systemd[1]: libpod-9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48.scope: Consumed 1.001s CPU time.
Nov 23 04:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-a00870bcb995e3a210206fa8ca17566f0deb71627e833440b7fd37694db0b032-merged.mount: Deactivated successfully.
Nov 23 04:43:28 localhost podman[284190]: 2025-11-23 09:43:28.625263995 +0000 UTC m=+0.073263120 container remove 9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_hermann, release=553, GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, RELEASE=main)
Nov 23 04:43:28 localhost systemd[1]: libpod-conmon-9b8126445dcd345f3ec9d5d4aa270ccbd7f576d6a57b4f961c44646a80869b48.scope: Deactivated successfully.
Nov 23 04:43:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:43:30.216 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:43:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:43:30.218 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 04:43:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54968 DF PROTO=TCP SPT=43174 DPT=9102 SEQ=4002211221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D2F590000000001030307) 
Nov 23 04:43:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54969 DF PROTO=TCP SPT=43174 DPT=9102 SEQ=4002211221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D4EDA0000000001030307) 
Nov 23 04:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:43:39 localhost podman[284223]: 2025-11-23 09:43:39.195384685 +0000 UTC m=+0.091568167 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:43:39 localhost ovn_metadata_agent[159423]: 2025-11-23 09:43:39.221 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:43:39 localhost podman[284223]: 2025-11-23 09:43:39.236269573 +0000 UTC m=+0.132453015 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:43:39 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:43:40 localhost podman[284244]: 2025-11-23 09:43:40.194429895 +0000 UTC m=+0.090703163 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 23 04:43:40 localhost podman[284245]: 2025-11-23 09:43:40.243583103 +0000 UTC m=+0.137861939 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:43:40 localhost podman[284245]: 2025-11-23 09:43:40.251485393 +0000 UTC m=+0.145764179 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:43:40 localhost podman[284244]: 2025-11-23 09:43:40.264026727 +0000 UTC m=+0.160299915 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7)
Nov 23 04:43:40 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:43:40 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:43:41 localhost podman[240144]: time="2025-11-23T09:43:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:43:41 localhost podman[240144]: @ - - [23/Nov/2025:09:43:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:43:41 localhost podman[240144]: @ - - [23/Nov/2025:09:43:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17703 "" "Go-http-client/1.1"
Nov 23 04:43:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:43:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:43:49 localhost podman[284286]: 2025-11-23 09:43:49.194983827 +0000 UTC m=+0.098671646 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:43:49 localhost podman[284286]: 2025-11-23 09:43:49.204006116 +0000 UTC m=+0.107693925 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:43:49 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:43:49 localhost podman[284287]: 2025-11-23 09:43:49.292764088 +0000 UTC m=+0.193318514 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:43:49 localhost podman[284287]: 2025-11-23 09:43:49.305023434 +0000 UTC m=+0.205577870 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:43:49 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:43:50 localhost nova_compute[281613]: 2025-11-23 09:43:50.774 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:50 localhost nova_compute[281613]: 2025-11-23 09:43:50.793 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:43:52 localhost systemd[1]: tmp-crun.5EvoIP.mount: Deactivated successfully.
Nov 23 04:43:52 localhost podman[284326]: 2025-11-23 09:43:52.185730194 +0000 UTC m=+0.088159396 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:43:52 localhost podman[284326]: 2025-11-23 09:43:52.229295824 +0000 UTC m=+0.131725066 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:43:52 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:43:52 localhost openstack_network_exporter[242118]: ERROR   09:43:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:43:52 localhost openstack_network_exporter[242118]: ERROR   09:43:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:43:52 localhost openstack_network_exporter[242118]: ERROR   09:43:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:43:52 localhost openstack_network_exporter[242118]: ERROR   09:43:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:43:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:43:52 localhost openstack_network_exporter[242118]: ERROR   09:43:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:43:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:43:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25709 DF PROTO=TCP SPT=59264 DPT=9102 SEQ=400452088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D88E00000000001030307) 
Nov 23 04:43:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25710 DF PROTO=TCP SPT=59264 DPT=9102 SEQ=400452088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D8CD90000000001030307) 
Nov 23 04:43:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54970 DF PROTO=TCP SPT=43174 DPT=9102 SEQ=4002211221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D8EDA0000000001030307) 
Nov 23 04:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:43:55 localhost podman[284350]: 2025-11-23 09:43:55.17176747 +0000 UTC m=+0.081543060 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 04:43:55 localhost podman[284350]: 2025-11-23 09:43:55.185922066 +0000 UTC m=+0.095697716 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 04:43:55 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:43:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25711 DF PROTO=TCP SPT=59264 DPT=9102 SEQ=400452088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D94D90000000001030307) 
Nov 23 04:43:56 localhost sshd[284368]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:43:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21468 DF PROTO=TCP SPT=56942 DPT=9102 SEQ=228427388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759D98D90000000001030307) 
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.021 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.021 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.022 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.022 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.037 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.038 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.038 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.039 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.039 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.040 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.040 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.040 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.040 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.055 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.056 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.056 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.056 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.057 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.493 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.723 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.725 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12499MB free_disk=41.83708190917969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.726 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.726 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.795 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.795 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:43:58 localhost nova_compute[281613]: 2025-11-23 09:43:58.832 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 04:43:59 localhost nova_compute[281613]: 2025-11-23 09:43:59.298 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 04:43:59 localhost nova_compute[281613]: 2025-11-23 09:43:59.305 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 04:43:59 localhost nova_compute[281613]: 2025-11-23 09:43:59.326 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 04:43:59 localhost nova_compute[281613]: 2025-11-23 09:43:59.329 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 04:43:59 localhost nova_compute[281613]: 2025-11-23 09:43:59.329 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:43:59 localhost ovn_controller[153786]: 2025-11-23T09:43:59Z|00038|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 23 04:44:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25712 DF PROTO=TCP SPT=59264 DPT=9102 SEQ=400452088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759DA49A0000000001030307) 
Nov 23 04:44:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25713 DF PROTO=TCP SPT=59264 DPT=9102 SEQ=400452088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759DC4D90000000001030307) 
Nov 23 04:44:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:44:09.248 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 04:44:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:44:09.248 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 04:44:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:44:09.249 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:44:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:44:10 localhost podman[284414]: 2025-11-23 09:44:10.164009412 +0000 UTC m=+0.070824100 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 04:44:10 localhost podman[284414]: 2025-11-23 09:44:10.203082223 +0000 UTC m=+0.109896861 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 23 04:44:10 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:44:11 localhost systemd[1]: tmp-crun.rRqs47.mount: Deactivated successfully.
Nov 23 04:44:11 localhost podman[284433]: 2025-11-23 09:44:11.185836214 +0000 UTC m=+0.092202550 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:44:11 localhost podman[284433]: 2025-11-23 09:44:11.22546495 +0000 UTC m=+0.131831296 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:44:11 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:44:11 localhost podman[284434]: 2025-11-23 09:44:11.227295344 +0000 UTC m=+0.130432476 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:44:11 localhost podman[240144]: time="2025-11-23T09:44:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:44:11 localhost podman[240144]: @ - - [23/Nov/2025:09:44:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:44:11 localhost podman[284434]: 2025-11-23 09:44:11.310916473 +0000 UTC m=+0.214053605 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:44:11 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:44:11 localhost podman[240144]: @ - - [23/Nov/2025:09:44:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17706 "" "Go-http-client/1.1"
Nov 23 04:44:12 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:e7:d2:09 MACPROTO=0800 SRC=89.248.163.200 DST=38.102.83.162 LEN=40 TOS=0x08 PREC=0x40 TTL=245 ID=4972 PROTO=TCP SPT=48651 DPT=9090 SEQ=2287821644 ACK=0 WINDOW=1024 RES=0x00 SYN URGP=0 
Nov 23 04:44:13 localhost sshd[284476]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:44:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:44:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:44:20 localhost podman[284479]: 2025-11-23 09:44:20.233433456 +0000 UTC m=+0.082648923 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:44:20 localhost podman[284479]: 2025-11-23 09:44:20.246136143 +0000 UTC m=+0.095351590 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:44:20 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:44:20 localhost podman[284478]: 2025-11-23 09:44:20.293141204 +0000 UTC m=+0.143849195 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:44:20 localhost podman[284478]: 2025-11-23 09:44:20.301189127 +0000 UTC m=+0.151897138 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:44:20 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:44:22 localhost openstack_network_exporter[242118]: ERROR   09:44:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:44:22 localhost openstack_network_exporter[242118]: ERROR   09:44:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:44:22 localhost openstack_network_exporter[242118]: ERROR   09:44:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:44:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:44:22 localhost openstack_network_exporter[242118]: ERROR   09:44:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:44:22 localhost openstack_network_exporter[242118]: ERROR   09:44:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:44:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:44:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:44:23 localhost podman[284518]: 2025-11-23 09:44:23.168701581 +0000 UTC m=+0.075381913 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:44:23 localhost podman[284518]: 2025-11-23 09:44:23.249283793 +0000 UTC m=+0.155964115 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 04:44:23 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:44:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44690 DF PROTO=TCP SPT=53934 DPT=9102 SEQ=651930350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759DFE100000000001030307) 
Nov 23 04:44:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44691 DF PROTO=TCP SPT=53934 DPT=9102 SEQ=651930350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E02190000000001030307) 
Nov 23 04:44:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25714 DF PROTO=TCP SPT=59264 DPT=9102 SEQ=400452088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E04D90000000001030307) 
Nov 23 04:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:44:26 localhost systemd[1]: tmp-crun.jhdP5H.mount: Deactivated successfully.
Nov 23 04:44:26 localhost podman[284544]: 2025-11-23 09:44:26.182817709 +0000 UTC m=+0.091294062 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:44:26 localhost podman[284544]: 2025-11-23 09:44:26.192985704 +0000 UTC m=+0.101462077 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=multipathd)
Nov 23 04:44:26 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:44:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44692 DF PROTO=TCP SPT=53934 DPT=9102 SEQ=651930350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E0A190000000001030307) 
Nov 23 04:44:26 localhost ovn_metadata_agent[159423]: 2025-11-23 09:44:26.716 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:44:26 localhost ovn_metadata_agent[159423]: 2025-11-23 09:44:26.717 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 04:44:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54971 DF PROTO=TCP SPT=43174 DPT=9102 SEQ=4002211221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E0CD90000000001030307) 
Nov 23 04:44:28 localhost ovn_metadata_agent[159423]: 2025-11-23 09:44:28.718 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:44:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44693 DF PROTO=TCP SPT=53934 DPT=9102 SEQ=651930350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E19D90000000001030307) 
Nov 23 04:44:32 localhost ovn_controller[153786]: 2025-11-23T09:44:32Z|00039|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 23 04:44:37 localhost sshd[284648]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:44:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44694 DF PROTO=TCP SPT=53934 DPT=9102 SEQ=651930350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E3AD90000000001030307) 
Nov 23 04:44:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:44:41 localhost podman[284651]: 2025-11-23 09:44:41.192258693 +0000 UTC m=+0.089843871 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 04:44:41 localhost podman[284651]: 2025-11-23 09:44:41.208002689 +0000 UTC m=+0.105587857 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:44:41 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:44:41 localhost podman[240144]: time="2025-11-23T09:44:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:44:41 localhost podman[240144]: @ - - [23/Nov/2025:09:44:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:44:41 localhost podman[240144]: @ - - [23/Nov/2025:09:44:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17702 "" "Go-http-client/1.1"
Nov 23 04:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:44:42 localhost podman[284671]: 2025-11-23 09:44:42.177230568 +0000 UTC m=+0.083578030 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Nov 23 04:44:42 localhost podman[284671]: 2025-11-23 09:44:42.215631468 +0000 UTC m=+0.121978900 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal)
Nov 23 04:44:42 localhost podman[284672]: 2025-11-23 09:44:42.228426529 +0000 UTC m=+0.130952561 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:44:42 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:44:42 localhost podman[284672]: 2025-11-23 09:44:42.23881015 +0000 UTC m=+0.141336152 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:44:42 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:44:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:44:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:44:51 localhost systemd[1]: tmp-crun.CtyfSx.mount: Deactivated successfully.
Nov 23 04:44:51 localhost podman[284715]: 2025-11-23 09:44:51.187216162 +0000 UTC m=+0.092030434 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 04:44:51 localhost podman[284715]: 2025-11-23 09:44:51.196980354 +0000 UTC m=+0.101794696 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 04:44:51 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:44:51 localhost podman[284716]: 2025-11-23 09:44:51.279543704 +0000 UTC m=+0.176456267 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:44:51 localhost podman[284716]: 2025-11-23 09:44:51.316052061 +0000 UTC m=+0.212964644 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:44:51 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:44:52 localhost openstack_network_exporter[242118]: ERROR   09:44:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:44:52 localhost openstack_network_exporter[242118]: ERROR   09:44:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:44:52 localhost openstack_network_exporter[242118]: ERROR   09:44:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:44:52 localhost openstack_network_exporter[242118]: ERROR   09:44:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:44:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:44:52 localhost openstack_network_exporter[242118]: ERROR   09:44:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:44:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:44:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4486 DF PROTO=TCP SPT=40260 DPT=9102 SEQ=635526874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E73400000000001030307) 
Nov 23 04:44:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:44:54 localhost podman[284758]: 2025-11-23 09:44:54.17473747 +0000 UTC m=+0.082288442 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:44:54 localhost podman[284758]: 2025-11-23 09:44:54.214975455 +0000 UTC m=+0.122526457 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:44:54 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:44:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4487 DF PROTO=TCP SPT=40260 DPT=9102 SEQ=635526874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E775A0000000001030307) 
Nov 23 04:44:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44695 DF PROTO=TCP SPT=53934 DPT=9102 SEQ=651930350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E7AD90000000001030307) 
Nov 23 04:44:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4488 DF PROTO=TCP SPT=40260 DPT=9102 SEQ=635526874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E7F590000000001030307) 
Nov 23 04:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:44:57 localhost podman[284783]: 2025-11-23 09:44:57.170449535 +0000 UTC m=+0.077230497 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 23 04:44:57 localhost podman[284783]: 2025-11-23 09:44:57.185890271 +0000 UTC m=+0.092671213 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:44:57 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:44:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25715 DF PROTO=TCP SPT=59264 DPT=9102 SEQ=400452088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E82D90000000001030307) 
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.322 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.361 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.361 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.362 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.378 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.378 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.378 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.379 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.379 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.380 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.380 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.406 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.406 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.406 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.407 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.407 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:44:59 localhost nova_compute[281613]: 2025-11-23 09:44:59.862 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.077 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.079 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12497MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.079 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.080 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.172 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.173 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.219 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:45:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4489 DF PROTO=TCP SPT=40260 DPT=9102 SEQ=635526874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759E8F190000000001030307) 
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.633 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.639 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.656 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.657 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:45:00 localhost nova_compute[281613]: 2025-11-23 09:45:00.658 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:45:01 localhost nova_compute[281613]: 2025-11-23 09:45:01.296 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:45:01 localhost nova_compute[281613]: 2025-11-23 09:45:01.297 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:45:01 localhost nova_compute[281613]: 2025-11-23 09:45:01.297 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:45:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4490 DF PROTO=TCP SPT=40260 DPT=9102 SEQ=635526874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759EAED90000000001030307) 
Nov 23 04:45:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:45:09.249 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:45:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:45:09.250 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:45:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:45:09.250 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:45:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:45:11 localhost podman[240144]: time="2025-11-23T09:45:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:45:11 localhost podman[240144]: @ - - [23/Nov/2025:09:45:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:45:11 localhost podman[240144]: @ - - [23/Nov/2025:09:45:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17711 "" "Go-http-client/1.1"
Nov 23 04:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:45:12 localhost podman[284846]: 2025-11-23 09:45:12.173068173 +0000 UTC m=+0.080380838 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true)
Nov 23 04:45:12 localhost podman[284846]: 2025-11-23 09:45:12.185922215 +0000 UTC m=+0.093234870 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 04:45:12 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:45:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:45:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:45:13 localhost podman[284867]: 2025-11-23 09:45:13.175564644 +0000 UTC m=+0.079273585 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:45:13 localhost podman[284867]: 2025-11-23 09:45:13.182890037 +0000 UTC m=+0.086599028 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:45:13 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:45:13 localhost podman[284866]: 2025-11-23 09:45:13.226433876 +0000 UTC m=+0.132522656 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64)
Nov 23 04:45:13 localhost podman[284866]: 2025-11-23 09:45:13.267114314 +0000 UTC m=+0.173203054 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=ubi9-minimal, maintainer=Red Hat, Inc., distribution-scope=public)
Nov 23 04:45:13 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:45:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:45:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:45:22 localhost podman[284909]: 2025-11-23 09:45:22.175621392 +0000 UTC m=+0.080907993 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:45:22 localhost podman[284909]: 2025-11-23 09:45:22.210075789 +0000 UTC m=+0.115362480 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:45:22 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:45:22 localhost podman[284910]: 2025-11-23 09:45:22.228248655 +0000 UTC m=+0.131361843 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:45:22 localhost podman[284910]: 2025-11-23 09:45:22.239912082 +0000 UTC m=+0.143025300 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:45:22 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:45:22 localhost openstack_network_exporter[242118]: ERROR   09:45:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:45:22 localhost openstack_network_exporter[242118]: ERROR   09:45:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:45:22 localhost openstack_network_exporter[242118]: ERROR   09:45:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:45:22 localhost openstack_network_exporter[242118]: ERROR   09:45:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:45:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:45:22 localhost openstack_network_exporter[242118]: ERROR   09:45:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:45:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:45:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59914 DF PROTO=TCP SPT=49272 DPT=9102 SEQ=1979526834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759EE86F0000000001030307) 
Nov 23 04:45:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59915 DF PROTO=TCP SPT=49272 DPT=9102 SEQ=1979526834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759EEC590000000001030307) 
Nov 23 04:45:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4491 DF PROTO=TCP SPT=40260 DPT=9102 SEQ=635526874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759EEEDA0000000001030307) 
Nov 23 04:45:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:45:25 localhost podman[284950]: 2025-11-23 09:45:25.168267037 +0000 UTC m=+0.075528597 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:45:25 localhost podman[284950]: 2025-11-23 09:45:25.255128681 +0000 UTC m=+0.162390261 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:45:25 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:45:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59916 DF PROTO=TCP SPT=49272 DPT=9102 SEQ=1979526834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759EF4590000000001030307) 
Nov 23 04:45:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44696 DF PROTO=TCP SPT=53934 DPT=9102 SEQ=651930350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759EF8D90000000001030307) 
Nov 23 04:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:45:28 localhost podman[284976]: 2025-11-23 09:45:28.208788638 +0000 UTC m=+0.113832365 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 04:45:28 localhost podman[284976]: 2025-11-23 09:45:28.222963938 +0000 UTC m=+0.128007665 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 23 04:45:28 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:45:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59917 DF PROTO=TCP SPT=49272 DPT=9102 SEQ=1979526834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F04190000000001030307) 
Nov 23 04:45:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59918 DF PROTO=TCP SPT=49272 DPT=9102 SEQ=1979526834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F24D90000000001030307) 
Nov 23 04:45:40 localhost sshd[285081]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:45:41 localhost podman[240144]: time="2025-11-23T09:45:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:45:41 localhost podman[240144]: @ - - [23/Nov/2025:09:45:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:45:41 localhost podman[240144]: @ - - [23/Nov/2025:09:45:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17710 "" "Go-http-client/1.1"
Nov 23 04:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:45:43 localhost podman[285083]: 2025-11-23 09:45:43.184012985 +0000 UTC m=+0.086582297 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 04:45:43 localhost podman[285083]: 2025-11-23 09:45:43.195785166 +0000 UTC m=+0.098354468 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 04:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:45:43 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:45:43 localhost podman[285101]: 2025-11-23 09:45:43.319476325 +0000 UTC m=+0.092262201 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:45:43 localhost podman[285101]: 2025-11-23 09:45:43.328077174 +0000 UTC m=+0.100863030 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:45:43 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:45:43 localhost podman[285121]: 2025-11-23 09:45:43.413698932 +0000 UTC m=+0.086335520 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41)
Nov 23 04:45:43 localhost podman[285121]: 2025-11-23 09:45:43.429927202 +0000 UTC m=+0.102563770 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter)
Nov 23 04:45:43 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:45:50 localhost sshd[285146]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:45:50 localhost systemd-logind[761]: New session 61 of user zuul.
Nov 23 04:45:50 localhost systemd[1]: Started Session 61 of User zuul.
Nov 23 04:45:50 localhost python3[285168]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:45:51 localhost subscription-manager[285169]: Unregistered machine with identity: 1dfa576e-3fe2-4912-9ff8-232d091abdc0
Nov 23 04:45:52 localhost openstack_network_exporter[242118]: ERROR   09:45:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:45:52 localhost openstack_network_exporter[242118]: ERROR   09:45:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:45:52 localhost openstack_network_exporter[242118]: ERROR   09:45:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:45:52 localhost openstack_network_exporter[242118]: ERROR   09:45:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:45:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:45:52 localhost openstack_network_exporter[242118]: ERROR   09:45:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:45:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:45:53 localhost podman[285171]: 2025-11-23 09:45:53.182578108 +0000 UTC m=+0.089189372 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:45:53 localhost podman[285171]: 2025-11-23 09:45:53.216902821 +0000 UTC m=+0.123514085 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:45:53 localhost podman[285172]: 2025-11-23 09:45:53.230709021 +0000 UTC m=+0.134998358 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:45:53 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:45:53 localhost podman[285172]: 2025-11-23 09:45:53.238314251 +0000 UTC m=+0.142603608 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:45:53 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:45:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9014 DF PROTO=TCP SPT=49946 DPT=9102 SEQ=2341760043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F5DA00000000001030307) 
Nov 23 04:45:54 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9015 DF PROTO=TCP SPT=49946 DPT=9102 SEQ=2341760043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F61990000000001030307) 
Nov 23 04:45:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59919 DF PROTO=TCP SPT=49272 DPT=9102 SEQ=1979526834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F64D90000000001030307) 
Nov 23 04:45:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:45:56 localhost podman[285212]: 2025-11-23 09:45:56.175055949 +0000 UTC m=+0.081805938 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 23 04:45:56 localhost podman[285212]: 2025-11-23 09:45:56.212163404 +0000 UTC m=+0.118913443 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:45:56 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:45:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9016 DF PROTO=TCP SPT=49946 DPT=9102 SEQ=2341760043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F699A0000000001030307) 
Nov 23 04:45:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4492 DF PROTO=TCP SPT=40260 DPT=9102 SEQ=635526874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F6CD90000000001030307) 
Nov 23 04:45:58 localhost nova_compute[281613]: 2025-11-23 09:45:58.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:45:58 localhost nova_compute[281613]: 2025-11-23 09:45:58.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.034 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.035 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.062 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.063 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.063 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.064 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.064 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:45:59 localhost systemd[1]: tmp-crun.uCg653.mount: Deactivated successfully.
Nov 23 04:45:59 localhost podman[285236]: 2025-11-23 09:45:59.18376377 +0000 UTC m=+0.088002098 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:45:59 localhost podman[285236]: 2025-11-23 09:45:59.19795049 +0000 UTC m=+0.102188808 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 04:45:59 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.498 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.705 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.707 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12496MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.707 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.708 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.808 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.809 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:45:59 localhost nova_compute[281613]: 2025-11-23 09:45:59.834 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:46:00 localhost nova_compute[281613]: 2025-11-23 09:46:00.288 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:46:00 localhost nova_compute[281613]: 2025-11-23 09:46:00.295 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:46:00 localhost nova_compute[281613]: 2025-11-23 09:46:00.317 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:46:00 localhost nova_compute[281613]: 2025-11-23 09:46:00.319 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:46:00 localhost nova_compute[281613]: 2025-11-23 09:46:00.320 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:46:00 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9017 DF PROTO=TCP SPT=49946 DPT=9102 SEQ=2341760043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F79590000000001030307) 
Nov 23 04:46:01 localhost nova_compute[281613]: 2025-11-23 09:46:01.304 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:01 localhost nova_compute[281613]: 2025-11-23 09:46:01.305 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:01 localhost nova_compute[281613]: 2025-11-23 09:46:01.305 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:01 localhost nova_compute[281613]: 2025-11-23 09:46:01.306 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:01 localhost nova_compute[281613]: 2025-11-23 09:46:01.306 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:02 localhost nova_compute[281613]: 2025-11-23 09:46:02.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:04 localhost sshd[285299]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:46:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9018 DF PROTO=TCP SPT=49946 DPT=9102 SEQ=2341760043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759F98DA0000000001030307) 
Nov 23 04:46:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:46:09.250 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:46:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:46:09.250 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:46:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:46:09.251 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:46:11 localhost podman[240144]: time="2025-11-23T09:46:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:46:11 localhost podman[240144]: @ - - [23/Nov/2025:09:46:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:46:11 localhost podman[240144]: @ - - [23/Nov/2025:09:46:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17712 "" "Go-http-client/1.1"
Nov 23 04:46:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:46:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:46:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:46:14 localhost podman[285301]: 2025-11-23 09:46:14.178832992 +0000 UTC m=+0.084752773 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Nov 23 04:46:14 localhost podman[285301]: 2025-11-23 09:46:14.187084371 +0000 UTC m=+0.093004152 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6)
Nov 23 04:46:14 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:46:14 localhost podman[285302]: 2025-11-23 09:46:14.233155184 +0000 UTC m=+0.135563054 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Nov 23 04:46:14 localhost podman[285302]: 2025-11-23 09:46:14.247949562 +0000 UTC m=+0.150357462 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:46:14 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:46:14 localhost podman[285303]: 2025-11-23 09:46:14.347229445 +0000 UTC m=+0.246416412 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:46:14 localhost podman[285303]: 2025-11-23 09:46:14.352746615 +0000 UTC m=+0.251933582 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:46:14 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:46:15 localhost systemd[1]: tmp-crun.FXx7jB.mount: Deactivated successfully.
Nov 23 04:46:20 localhost sshd[285440]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:46:22 localhost openstack_network_exporter[242118]: ERROR   09:46:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:46:22 localhost openstack_network_exporter[242118]: ERROR   09:46:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:46:22 localhost openstack_network_exporter[242118]: ERROR   09:46:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:46:22 localhost openstack_network_exporter[242118]: ERROR   09:46:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:46:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:46:22 localhost openstack_network_exporter[242118]: ERROR   09:46:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:46:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:46:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56307 DF PROTO=TCP SPT=55160 DPT=9102 SEQ=2166924994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759FD2CF0000000001030307) 
Nov 23 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:46:24 localhost podman[285479]: 2025-11-23 09:46:24.18237602 +0000 UTC m=+0.083297790 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:46:24 localhost podman[285479]: 2025-11-23 09:46:24.18997214 +0000 UTC m=+0.090893930 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:46:24 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:46:24 localhost podman[285478]: 2025-11-23 09:46:24.278943005 +0000 UTC m=+0.182975141 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 23 04:46:24 localhost podman[285478]: 2025-11-23 09:46:24.288991463 +0000 UTC m=+0.193023589 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:46:24 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:46:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56308 DF PROTO=TCP SPT=55160 DPT=9102 SEQ=2166924994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759FD6D90000000001030307) 
Nov 23 04:46:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9019 DF PROTO=TCP SPT=49946 DPT=9102 SEQ=2341760043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759FD8D90000000001030307) 
Nov 23 04:46:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56309 DF PROTO=TCP SPT=55160 DPT=9102 SEQ=2166924994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759FDED90000000001030307) 
Nov 23 04:46:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:46:27 localhost podman[285518]: 2025-11-23 09:46:27.172723426 +0000 UTC m=+0.080318477 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 04:46:27 localhost podman[285518]: 2025-11-23 09:46:27.238117011 +0000 UTC m=+0.145711992 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 23 04:46:27 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:46:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59920 DF PROTO=TCP SPT=49272 DPT=9102 SEQ=1979526834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759FE2DA0000000001030307) 
Nov 23 04:46:28 localhost sshd[285543]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:46:28 localhost systemd-logind[761]: New session 62 of user tripleo-admin.
Nov 23 04:46:28 localhost systemd[1]: Created slice User Slice of UID 1003.
Nov 23 04:46:28 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Nov 23 04:46:28 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Nov 23 04:46:28 localhost systemd[1]: Starting User Manager for UID 1003...
Nov 23 04:46:28 localhost systemd[285547]: Queued start job for default target Main User Target.
Nov 23 04:46:28 localhost systemd[285547]: Created slice User Application Slice.
Nov 23 04:46:28 localhost systemd[285547]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 04:46:28 localhost systemd[285547]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 04:46:28 localhost systemd[285547]: Reached target Paths.
Nov 23 04:46:28 localhost systemd[285547]: Reached target Timers.
Nov 23 04:46:28 localhost systemd[285547]: Starting D-Bus User Message Bus Socket...
Nov 23 04:46:28 localhost systemd[285547]: Starting Create User's Volatile Files and Directories...
Nov 23 04:46:28 localhost systemd[285547]: Listening on D-Bus User Message Bus Socket.
Nov 23 04:46:28 localhost systemd[285547]: Reached target Sockets.
Nov 23 04:46:28 localhost systemd[285547]: Finished Create User's Volatile Files and Directories.
Nov 23 04:46:28 localhost systemd[285547]: Reached target Basic System.
Nov 23 04:46:28 localhost systemd[285547]: Reached target Main User Target.
Nov 23 04:46:28 localhost systemd[285547]: Startup finished in 144ms.
Nov 23 04:46:28 localhost systemd[1]: Started User Manager for UID 1003.
Nov 23 04:46:28 localhost systemd[1]: Started Session 62 of User tripleo-admin.
Nov 23 04:46:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:46:29 localhost podman[285691]: 2025-11-23 09:46:29.385344216 +0000 UTC m=+0.090518609 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:46:29 localhost podman[285691]: 2025-11-23 09:46:29.401003739 +0000 UTC m=+0.106178132 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 04:46:29 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:46:29 localhost python3[285690]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:46:29 localhost systemd-journald[47537]: Field hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation.
Nov 23 04:46:29 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 04:46:29 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:46:29 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 04:46:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:60:fc:17 MACDST=fa:16:3e:e4:75:25 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56310 DF PROTO=TCP SPT=55160 DPT=9102 SEQ=2166924994 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A759FEE9A0000000001030307) 
Nov 23 04:46:30 localhost python3[285855]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 23 04:46:30 localhost systemd[1]: Stopping Netfilter Tables...
Nov 23 04:46:30 localhost systemd[1]: nftables.service: Deactivated successfully.
Nov 23 04:46:30 localhost systemd[1]: Stopped Netfilter Tables.
Nov 23 04:46:30 localhost systemd[1]: Starting Netfilter Tables...
Nov 23 04:46:30 localhost systemd[1]: Finished Netfilter Tables.
Nov 23 04:46:41 localhost podman[240144]: time="2025-11-23T09:46:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:46:41 localhost podman[240144]: @ - - [23/Nov/2025:09:46:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149987 "" "Go-http-client/1.1"
Nov 23 04:46:41 localhost podman[240144]: @ - - [23/Nov/2025:09:46:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17704 "" "Go-http-client/1.1"
Nov 23 04:46:41 localhost podman[286135]: 
Nov 23 04:46:41 localhost podman[286135]: 2025-11-23 09:46:41.751245367 +0000 UTC m=+0.077473749 container create 63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_brown, com.redhat.component=rhceph-container, ceph=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553)
Nov 23 04:46:41 localhost systemd[1]: Started libpod-conmon-63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2.scope.
Nov 23 04:46:41 localhost podman[286135]: 2025-11-23 09:46:41.720515599 +0000 UTC m=+0.046744031 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:46:41 localhost systemd[1]: Started libcrun container.
Nov 23 04:46:41 localhost podman[286135]: 2025-11-23 09:46:41.839058901 +0000 UTC m=+0.165287283 container init 63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_brown, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.33.12, release=553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main)
Nov 23 04:46:41 localhost podman[286135]: 2025-11-23 09:46:41.852267126 +0000 UTC m=+0.178495508 container start 63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_brown, io.openshift.expose-services=, name=rhceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git)
Nov 23 04:46:41 localhost podman[286135]: 2025-11-23 09:46:41.852913713 +0000 UTC m=+0.179142095 container attach 63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_brown, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., release=553, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, architecture=x86_64)
Nov 23 04:46:41 localhost systemd[1]: libpod-63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2.scope: Deactivated successfully.
Nov 23 04:46:41 localhost trusting_brown[286148]: 167 167
Nov 23 04:46:41 localhost podman[286135]: 2025-11-23 09:46:41.858702643 +0000 UTC m=+0.184931075 container died 63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_brown, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, RELEASE=main)
Nov 23 04:46:41 localhost podman[286153]: 2025-11-23 09:46:41.963038382 +0000 UTC m=+0.093816940 container remove 63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_brown, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=)
Nov 23 04:46:41 localhost systemd[1]: libpod-conmon-63958096087eadc209bcdfeb5c4c7015e6ae0f49cd5d35391670fbf9a6c987e2.scope: Deactivated successfully.
Nov 23 04:46:42 localhost systemd[1]: Reloading.
Nov 23 04:46:42 localhost systemd-sysv-generator[286198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:46:42 localhost systemd-rc-local-generator[286191]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: var-lib-containers-storage-overlay-389867cb7b1161689ce248abdbad30771262ec566bac558a2759d16f69cf6d51-merged.mount: Deactivated successfully.
Nov 23 04:46:42 localhost systemd[1]: Reloading.
Nov 23 04:46:42 localhost systemd-rc-local-generator[286234]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:46:42 localhost systemd-sysv-generator[286239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:46:42 localhost systemd[1]: Starting Ceph mds.mds.np0005532586.mfohsb for 46550e70-79cb-5f55-bf6d-1204b97e083b...
Nov 23 04:46:43 localhost podman[286301]: 
Nov 23 04:46:43 localhost podman[286301]: 2025-11-23 09:46:43.101124034 +0000 UTC m=+0.076895423 container create 8c97f878334e2a3d99aca9af537b137627c4ca056dd88396466e14b59ab3cd70 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532586-mfohsb, vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container)
Nov 23 04:46:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85d07d5564b62904f4c28b3fc137d2e1c3e385eff5c094c19dfc7a0448d27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 04:46:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85d07d5564b62904f4c28b3fc137d2e1c3e385eff5c094c19dfc7a0448d27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 04:46:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85d07d5564b62904f4c28b3fc137d2e1c3e385eff5c094c19dfc7a0448d27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 04:46:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85d07d5564b62904f4c28b3fc137d2e1c3e385eff5c094c19dfc7a0448d27/merged/var/lib/ceph/mds/ceph-mds.np0005532586.mfohsb supports timestamps until 2038 (0x7fffffff)
Nov 23 04:46:43 localhost podman[286301]: 2025-11-23 09:46:43.169011949 +0000 UTC m=+0.144783388 container init 8c97f878334e2a3d99aca9af537b137627c4ca056dd88396466e14b59ab3cd70 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532586-mfohsb, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:46:43 localhost podman[286301]: 2025-11-23 09:46:43.069773079 +0000 UTC m=+0.045544498 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:46:43 localhost podman[286301]: 2025-11-23 09:46:43.178328586 +0000 UTC m=+0.154099975 container start 8c97f878334e2a3d99aca9af537b137627c4ca056dd88396466e14b59ab3cd70 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532586-mfohsb, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container)
Nov 23 04:46:43 localhost bash[286301]: 8c97f878334e2a3d99aca9af537b137627c4ca056dd88396466e14b59ab3cd70
Nov 23 04:46:43 localhost systemd[1]: Started Ceph mds.mds.np0005532586.mfohsb for 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 04:46:43 localhost ceph-mds[286319]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 04:46:43 localhost ceph-mds[286319]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2
Nov 23 04:46:43 localhost ceph-mds[286319]: main not setting numa affinity
Nov 23 04:46:43 localhost ceph-mds[286319]: pidfile_write: ignore empty --pid-file
Nov 23 04:46:43 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532586-mfohsb[286315]: starting mds.mds.np0005532586.mfohsb at 
Nov 23 04:46:43 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb Updating MDS map to version 6 from mon.1
Nov 23 04:46:43 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb Updating MDS map to version 7 from mon.1
Nov 23 04:46:43 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb Monitors have assigned me to become a standby.
Nov 23 04:46:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:46:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:46:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:46:45 localhost podman[286340]: 2025-11-23 09:46:45.184429995 +0000 UTC m=+0.087303670 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 23 04:46:45 localhost systemd[1]: tmp-crun.Kj7fUI.mount: Deactivated successfully.
Nov 23 04:46:45 localhost podman[286340]: 2025-11-23 09:46:45.232236565 +0000 UTC m=+0.135110290 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:46:45 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:46:45 localhost podman[286339]: 2025-11-23 09:46:45.236703698 +0000 UTC m=+0.139879851 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Nov 23 04:46:45 localhost systemd[1]: tmp-crun.Gb0Eh4.mount: Deactivated successfully.
Nov 23 04:46:45 localhost podman[286341]: 2025-11-23 09:46:45.306040772 +0000 UTC m=+0.205570005 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:46:45 localhost podman[286341]: 2025-11-23 09:46:45.316581473 +0000 UTC m=+0.216110696 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:46:45 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:46:45 localhost podman[286339]: 2025-11-23 09:46:45.368142646 +0000 UTC m=+0.271318839 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:46:45 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:46:49 localhost podman[286527]: 2025-11-23 09:46:49.431920799 +0000 UTC m=+0.101575544 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, release=553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:46:49 localhost podman[286527]: 2025-11-23 09:46:49.53808691 +0000 UTC m=+0.207741705 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, release=553, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:46:51 localhost systemd[1]: session-61.scope: Deactivated successfully.
Nov 23 04:46:51 localhost systemd-logind[761]: Session 61 logged out. Waiting for processes to exit.
Nov 23 04:46:51 localhost systemd-logind[761]: Removed session 61.
Nov 23 04:46:52 localhost openstack_network_exporter[242118]: ERROR   09:46:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:46:52 localhost openstack_network_exporter[242118]: ERROR   09:46:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:46:52 localhost openstack_network_exporter[242118]: ERROR   09:46:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:46:52 localhost openstack_network_exporter[242118]: ERROR   09:46:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:46:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:46:52 localhost openstack_network_exporter[242118]: ERROR   09:46:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:46:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:46:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:46:55 localhost systemd[1]: tmp-crun.wjDxMZ.mount: Deactivated successfully.
Nov 23 04:46:55 localhost podman[286645]: 2025-11-23 09:46:55.19161013 +0000 UTC m=+0.094093667 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:46:55 localhost podman[286646]: 2025-11-23 09:46:55.230524495 +0000 UTC m=+0.131455260 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:46:55 localhost podman[286646]: 2025-11-23 09:46:55.243979166 +0000 UTC m=+0.144909941 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:46:55 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:46:55 localhost podman[286645]: 2025-11-23 09:46:55.30101783 +0000 UTC m=+0.203501387 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 04:46:55 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:46:58 localhost nova_compute[281613]: 2025-11-23 09:46:58.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:46:58 localhost podman[286685]: 2025-11-23 09:46:58.174868442 +0000 UTC m=+0.079907786 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:46:58 localhost podman[286685]: 2025-11-23 09:46:58.240436372 +0000 UTC m=+0.145475706 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 23 04:46:58 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.032 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.033 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.033 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.034 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.059 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.060 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.060 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.061 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.061 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.508 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.659 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.660 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12463MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.661 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.661 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.724 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.724 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:46:59 localhost nova_compute[281613]: 2025-11-23 09:46:59.746 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:47:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:47:00 localhost podman[286753]: 2025-11-23 09:47:00.17292032 +0000 UTC m=+0.083809323 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:47:00 localhost nova_compute[281613]: 2025-11-23 09:47:00.199 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:47:00 localhost nova_compute[281613]: 2025-11-23 09:47:00.206 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:47:00 localhost podman[286753]: 2025-11-23 09:47:00.209918442 +0000 UTC m=+0.120807425 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:47:00 localhost nova_compute[281613]: 2025-11-23 09:47:00.220 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:47:00 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:47:00 localhost nova_compute[281613]: 2025-11-23 09:47:00.223 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:47:00 localhost nova_compute[281613]: 2025-11-23 09:47:00.223 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:47:01 localhost nova_compute[281613]: 2025-11-23 09:47:01.210 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:01 localhost nova_compute[281613]: 2025-11-23 09:47:01.210 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:01 localhost sshd[286776]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:47:02 localhost nova_compute[281613]: 2025-11-23 09:47:02.016 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:02 localhost nova_compute[281613]: 2025-11-23 09:47:02.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:02 localhost nova_compute[281613]: 2025-11-23 09:47:02.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:02 localhost sshd[286777]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:47:03 localhost nova_compute[281613]: 2025-11-23 09:47:03.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:47:09.251 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:47:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:47:09.252 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:47:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:47:09.252 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:47:09 localhost sshd[286780]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:47:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:47:11 localhost podman[240144]: time="2025-11-23T09:47:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:47:11 localhost podman[240144]: @ - - [23/Nov/2025:09:47:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152065 "" "Go-http-client/1.1"
Nov 23 04:47:11 localhost podman[240144]: @ - - [23/Nov/2025:09:47:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18200 "" "Go-http-client/1.1"
Nov 23 04:47:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:47:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:47:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:47:16 localhost podman[286782]: 2025-11-23 09:47:16.592697861 +0000 UTC m=+0.494304095 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Nov 23 04:47:16 localhost podman[286782]: 2025-11-23 09:47:16.631983956 +0000 UTC m=+0.533590200 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Nov 23 04:47:16 localhost systemd[1]: tmp-crun.3mKBgZ.mount: Deactivated successfully.
Nov 23 04:47:16 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:47:16 localhost podman[286783]: 2025-11-23 09:47:16.65388556 +0000 UTC m=+0.551886953 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Nov 23 04:47:16 localhost podman[286784]: 2025-11-23 09:47:16.56043504 +0000 UTC m=+0.458180316 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:47:16 localhost podman[286784]: 2025-11-23 09:47:16.690162412 +0000 UTC m=+0.587907628 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:47:16 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:47:16 localhost podman[286783]: 2025-11-23 09:47:16.714166184 +0000 UTC m=+0.612167587 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 04:47:16 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb Updating MDS map to version 11 from mon.1
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.0.11 handle_mds_map i am now mds.0.11
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.0.11 handle_mds_map state change up:standby --> up:replay
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.0.11 replay_start
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.0.11  waiting for osdmap 79 (which blocklists prior instance)
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.0.cache creating system inode with ino:0x100
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.0.cache creating system inode with ino:0x1
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.0.11 Finished replaying journal
Nov 23 04:47:16 localhost ceph-mds[286319]: mds.0.11 making mds journal writeable
Nov 23 04:47:17 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb Updating MDS map to version 12 from mon.1
Nov 23 04:47:17 localhost ceph-mds[286319]: mds.0.11 handle_mds_map i am now mds.0.11
Nov 23 04:47:17 localhost ceph-mds[286319]: mds.0.11 handle_mds_map state change up:replay --> up:reconnect
Nov 23 04:47:17 localhost ceph-mds[286319]: mds.0.11 reconnect_start
Nov 23 04:47:17 localhost ceph-mds[286319]: mds.0.11 reopen_log
Nov 23 04:47:17 localhost ceph-mds[286319]: mds.0.11 reconnect_done
Nov 23 04:47:18 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb Updating MDS map to version 13 from mon.1
Nov 23 04:47:18 localhost ceph-mds[286319]: mds.0.11 handle_mds_map i am now mds.0.11
Nov 23 04:47:18 localhost ceph-mds[286319]: mds.0.11 handle_mds_map state change up:reconnect --> up:rejoin
Nov 23 04:47:18 localhost ceph-mds[286319]: mds.0.11 rejoin_start
Nov 23 04:47:18 localhost ceph-mds[286319]: mds.0.11 rejoin_joint_start
Nov 23 04:47:18 localhost ceph-mds[286319]: mds.0.11 rejoin_done
Nov 23 04:47:19 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb Updating MDS map to version 14 from mon.1
Nov 23 04:47:19 localhost ceph-mds[286319]: mds.0.11 handle_mds_map i am now mds.0.11
Nov 23 04:47:19 localhost ceph-mds[286319]: mds.0.11 handle_mds_map state change up:rejoin --> up:active
Nov 23 04:47:19 localhost ceph-mds[286319]: mds.0.11 recovery_done -- successful recovery!
Nov 23 04:47:19 localhost ceph-mds[286319]: mds.0.11 active_start
Nov 23 04:47:19 localhost ceph-mds[286319]: mds.0.11 cluster recovered.
Nov 23 04:47:22 localhost openstack_network_exporter[242118]: ERROR   09:47:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:47:22 localhost openstack_network_exporter[242118]: ERROR   09:47:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:47:22 localhost openstack_network_exporter[242118]: ERROR   09:47:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:47:22 localhost openstack_network_exporter[242118]: ERROR   09:47:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:47:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:47:22 localhost openstack_network_exporter[242118]: ERROR   09:47:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:47:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:47:24 localhost ceph-mds[286319]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 23 04:47:24 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mds-mds-np0005532586-mfohsb[286315]: 2025-11-23T09:47:24.007+0000 7fb4cab4d640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 23 04:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:47:26 localhost podman[286856]: 2025-11-23 09:47:26.182863766 +0000 UTC m=+0.085039788 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:47:26 localhost podman[286856]: 2025-11-23 09:47:26.190773734 +0000 UTC m=+0.092949796 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:47:26 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:47:26 localhost podman[286857]: 2025-11-23 09:47:26.27323874 +0000 UTC m=+0.171913916 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:47:26 localhost podman[286857]: 2025-11-23 09:47:26.311898167 +0000 UTC m=+0.210573403 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:47:26 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:47:27 localhost sshd[286915]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:47:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:47:28 localhost systemd[1]: tmp-crun.wjQpge.mount: Deactivated successfully.
Nov 23 04:47:28 localhost podman[286917]: 2025-11-23 09:47:28.861806406 +0000 UTC m=+0.088856603 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:47:28 localhost podman[286917]: 2025-11-23 09:47:28.903413385 +0000 UTC m=+0.130463582 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 04:47:28 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:47:30 localhost systemd[1]: session-62.scope: Deactivated successfully.
Nov 23 04:47:30 localhost systemd[1]: session-62.scope: Consumed 1.333s CPU time.
Nov 23 04:47:30 localhost systemd-logind[761]: Session 62 logged out. Waiting for processes to exit.
Nov 23 04:47:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:47:30 localhost systemd-logind[761]: Removed session 62.
Nov 23 04:47:30 localhost podman[286942]: 2025-11-23 09:47:30.376623897 +0000 UTC m=+0.081234113 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:47:30 localhost podman[286942]: 2025-11-23 09:47:30.390898161 +0000 UTC m=+0.095508437 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd)
Nov 23 04:47:30 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:47:40 localhost systemd[1]: Stopping User Manager for UID 1003...
Nov 23 04:47:40 localhost systemd[285547]: Activating special unit Exit the Session...
Nov 23 04:47:40 localhost systemd[285547]: Stopped target Main User Target.
Nov 23 04:47:40 localhost systemd[285547]: Stopped target Basic System.
Nov 23 04:47:40 localhost systemd[285547]: Stopped target Paths.
Nov 23 04:47:40 localhost systemd[285547]: Stopped target Sockets.
Nov 23 04:47:40 localhost systemd[285547]: Stopped target Timers.
Nov 23 04:47:40 localhost systemd[285547]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 23 04:47:40 localhost systemd[285547]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 04:47:40 localhost systemd[285547]: Closed D-Bus User Message Bus Socket.
Nov 23 04:47:40 localhost systemd[285547]: Stopped Create User's Volatile Files and Directories.
Nov 23 04:47:40 localhost systemd[285547]: Removed slice User Application Slice.
Nov 23 04:47:40 localhost systemd[285547]: Reached target Shutdown.
Nov 23 04:47:40 localhost systemd[285547]: Finished Exit the Session.
Nov 23 04:47:40 localhost systemd[285547]: Reached target Exit the Session.
Nov 23 04:47:40 localhost systemd[1]: user@1003.service: Deactivated successfully.
Nov 23 04:47:40 localhost systemd[1]: Stopped User Manager for UID 1003.
Nov 23 04:47:40 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Nov 23 04:47:40 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Nov 23 04:47:40 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Nov 23 04:47:40 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Nov 23 04:47:40 localhost systemd[1]: Removed slice User Slice of UID 1003.
Nov 23 04:47:40 localhost systemd[1]: user-1003.slice: Consumed 1.702s CPU time.
Nov 23 04:47:41 localhost podman[240144]: time="2025-11-23T09:47:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:47:41 localhost podman[240144]: @ - - [23/Nov/2025:09:47:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152065 "" "Go-http-client/1.1"
Nov 23 04:47:41 localhost podman[240144]: @ - - [23/Nov/2025:09:47:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18204 "" "Go-http-client/1.1"
Nov 23 04:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:47:47 localhost podman[287089]: 2025-11-23 09:47:47.180449298 +0000 UTC m=+0.081385278 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:47:47 localhost podman[287089]: 2025-11-23 09:47:47.216987756 +0000 UTC m=+0.117923696 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:47:47 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:47:47 localhost podman[287088]: 2025-11-23 09:47:47.233161562 +0000 UTC m=+0.136934211 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 04:47:47 localhost podman[287088]: 2025-11-23 09:47:47.24789849 +0000 UTC m=+0.151671149 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 23 04:47:47 localhost podman[287087]: 2025-11-23 09:47:47.282401742 +0000 UTC m=+0.188554055 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Nov 23 04:47:47 localhost podman[287087]: 2025-11-23 09:47:47.297029785 +0000 UTC m=+0.203182088 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm)
Nov 23 04:47:47 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:47:47 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:47:52 localhost openstack_network_exporter[242118]: ERROR   09:47:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:47:52 localhost openstack_network_exporter[242118]: ERROR   09:47:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:47:52 localhost openstack_network_exporter[242118]: ERROR   09:47:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:47:52 localhost openstack_network_exporter[242118]: ERROR   09:47:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:47:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:47:52 localhost openstack_network_exporter[242118]: ERROR   09:47:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:47:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:47:57 localhost podman[287150]: 2025-11-23 09:47:57.181790434 +0000 UTC m=+0.087888478 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 23 04:47:57 localhost podman[287150]: 2025-11-23 09:47:57.217011715 +0000 UTC m=+0.123109719 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:47:57 localhost systemd[1]: tmp-crun.Bsxw6d.mount: Deactivated successfully.
Nov 23 04:47:57 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:47:57 localhost podman[287151]: 2025-11-23 09:47:57.233758608 +0000 UTC m=+0.136861979 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:47:57 localhost podman[287151]: 2025-11-23 09:47:57.271005006 +0000 UTC m=+0.174108417 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:47:57 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:47:58 localhost nova_compute[281613]: 2025-11-23 09:47:58.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:58 localhost nova_compute[281613]: 2025-11-23 09:47:58.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 04:47:58 localhost nova_compute[281613]: 2025-11-23 09:47:58.038 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 04:47:58 localhost nova_compute[281613]: 2025-11-23 09:47:58.040 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:58 localhost nova_compute[281613]: 2025-11-23 09:47:58.041 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 04:47:58 localhost nova_compute[281613]: 2025-11-23 09:47:58.059 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:47:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:47:59 localhost podman[287192]: 2025-11-23 09:47:59.167708787 +0000 UTC m=+0.075616888 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Nov 23 04:47:59 localhost podman[287192]: 2025-11-23 09:47:59.236501615 +0000 UTC m=+0.144409716 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:47:59 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:48:00 localhost nova_compute[281613]: 2025-11-23 09:48:00.073 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:00 localhost nova_compute[281613]: 2025-11-23 09:48:00.073 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:48:00 localhost nova_compute[281613]: 2025-11-23 09:48:00.074 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:48:00 localhost nova_compute[281613]: 2025-11-23 09:48:00.093 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.039 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.040 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.040 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.041 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.041 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:48:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:48:01 localhost systemd[1]: tmp-crun.qDb8zV.mount: Deactivated successfully.
Nov 23 04:48:01 localhost podman[287219]: 2025-11-23 09:48:01.188164913 +0000 UTC m=+0.091397004 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:48:01 localhost podman[287219]: 2025-11-23 09:48:01.22322532 +0000 UTC m=+0.126457381 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 23 04:48:01 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.510 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.640 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.640 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12463MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.641 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.641 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.787 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.787 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.872 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.954 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.954 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:48:01 localhost nova_compute[281613]: 2025-11-23 09:48:01.972 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 04:48:02 localhost nova_compute[281613]: 2025-11-23 09:48:02.008 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 04:48:02 localhost nova_compute[281613]: 2025-11-23 09:48:02.035 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:48:02 localhost nova_compute[281613]: 2025-11-23 09:48:02.487 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:48:02 localhost nova_compute[281613]: 2025-11-23 09:48:02.492 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:48:02 localhost nova_compute[281613]: 2025-11-23 09:48:02.508 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:48:02 localhost nova_compute[281613]: 2025-11-23 09:48:02.511 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:48:02 localhost nova_compute[281613]: 2025-11-23 09:48:02.511 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:48:03 localhost nova_compute[281613]: 2025-11-23 09:48:03.508 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:03 localhost nova_compute[281613]: 2025-11-23 09:48:03.509 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:03 localhost nova_compute[281613]: 2025-11-23 09:48:03.510 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:03 localhost nova_compute[281613]: 2025-11-23 09:48:03.510 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:48:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:48:09.253 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:48:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:48:09.253 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:48:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:48:09.253 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:48:11 localhost podman[240144]: time="2025-11-23T09:48:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:48:11 localhost podman[240144]: @ - - [23/Nov/2025:09:48:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152065 "" "Go-http-client/1.1"
Nov 23 04:48:11 localhost podman[240144]: @ - - [23/Nov/2025:09:48:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18205 "" "Go-http-client/1.1"
Nov 23 04:48:16 localhost podman[287414]: 
Nov 23 04:48:16 localhost podman[287414]: 2025-11-23 09:48:16.141113549 +0000 UTC m=+0.079375411 container create 9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_galois, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:48:16 localhost systemd[1]: Started libpod-conmon-9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75.scope.
Nov 23 04:48:16 localhost systemd[1]: Started libcrun container.
Nov 23 04:48:16 localhost podman[287414]: 2025-11-23 09:48:16.106882954 +0000 UTC m=+0.045144896 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:48:16 localhost podman[287414]: 2025-11-23 09:48:16.208229572 +0000 UTC m=+0.146491434 container init 9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_galois, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, GIT_CLEAN=True)
Nov 23 04:48:16 localhost podman[287414]: 2025-11-23 09:48:16.218437364 +0000 UTC m=+0.156699236 container start 9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_galois, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=553, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., ceph=True, build-date=2025-09-24T08:57:55, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:48:16 localhost podman[287414]: 2025-11-23 09:48:16.218719271 +0000 UTC m=+0.156981183 container attach 9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_galois, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, build-date=2025-09-24T08:57:55, distribution-scope=public, vendor=Red Hat, Inc.)
Nov 23 04:48:16 localhost festive_galois[287429]: 167 167
Nov 23 04:48:16 localhost systemd[1]: libpod-9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75.scope: Deactivated successfully.
Nov 23 04:48:16 localhost podman[287414]: 2025-11-23 09:48:16.222208857 +0000 UTC m=+0.160470739 container died 9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_galois, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.expose-services=, version=7, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:48:16 localhost podman[287434]: 2025-11-23 09:48:16.318357262 +0000 UTC m=+0.083145396 container remove 9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_galois, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:48:16 localhost systemd[1]: libpod-conmon-9eb5fd0134e546c47029fa8375bfd4dee727c309f928377502c2d4af25fd7f75.scope: Deactivated successfully.
Nov 23 04:48:16 localhost systemd[1]: Reloading.
Nov 23 04:48:16 localhost systemd-sysv-generator[287475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:48:16 localhost systemd-rc-local-generator[287470]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: var-lib-containers-storage-overlay-4692bb8f91e8f9f65eb3124a0ef570d535b19350b665b72ed5fabada4dee0f82-merged.mount: Deactivated successfully.
Nov 23 04:48:16 localhost systemd[1]: Reloading.
Nov 23 04:48:16 localhost systemd-sysv-generator[287520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:48:16 localhost systemd-rc-local-generator[287514]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:17 localhost systemd[1]: Starting Ceph mgr.np0005532586.thmvqb for 46550e70-79cb-5f55-bf6d-1204b97e083b...
Nov 23 04:48:17 localhost podman[287579]: 
Nov 23 04:48:17 localhost podman[287579]: 2025-11-23 09:48:17.405095527 +0000 UTC m=+0.047354638 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:48:17 localhost podman[287579]: 2025-11-23 09:48:17.727791254 +0000 UTC m=+0.370050335 container create 7fea328e2ba513cb403518714e886d5d145517b526637a998afe726c13739b0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb, version=7, ceph=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12)
Nov 23 04:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586417456bc5b4885d9a4ae198677bed2a5866e5da1f735291035fd90546f2c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586417456bc5b4885d9a4ae198677bed2a5866e5da1f735291035fd90546f2c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586417456bc5b4885d9a4ae198677bed2a5866e5da1f735291035fd90546f2c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/586417456bc5b4885d9a4ae198677bed2a5866e5da1f735291035fd90546f2c0/merged/var/lib/ceph/mgr/ceph-np0005532586.thmvqb supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:17 localhost podman[287579]: 2025-11-23 09:48:17.79324831 +0000 UTC m=+0.435507391 container init 7fea328e2ba513cb403518714e886d5d145517b526637a998afe726c13739b0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, version=7, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:48:17 localhost podman[287579]: 2025-11-23 09:48:17.809718545 +0000 UTC m=+0.451977606 container start 7fea328e2ba513cb403518714e886d5d145517b526637a998afe726c13739b0c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., name=rhceph)
Nov 23 04:48:17 localhost bash[287579]: 7fea328e2ba513cb403518714e886d5d145517b526637a998afe726c13739b0c
Nov 23 04:48:17 localhost systemd[1]: Started Ceph mgr.np0005532586.thmvqb for 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 04:48:17 localhost ceph-mgr[287623]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 04:48:17 localhost ceph-mgr[287623]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Nov 23 04:48:17 localhost ceph-mgr[287623]: pidfile_write: ignore empty --pid-file
Nov 23 04:48:17 localhost podman[287592]: 2025-11-23 09:48:17.861164944 +0000 UTC m=+0.094385735 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64)
Nov 23 04:48:17 localhost podman[287593]: 2025-11-23 09:48:17.904775778 +0000 UTC m=+0.135948682 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:48:17 localhost ceph-mgr[287623]: mgr[py] Loading python module 'alerts'
Nov 23 04:48:17 localhost podman[287592]: 2025-11-23 09:48:17.926827077 +0000 UTC m=+0.160047888 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Nov 23 04:48:17 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:48:17 localhost podman[287593]: 2025-11-23 09:48:17.940217657 +0000 UTC m=+0.171390581 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:48:17 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:48:17 localhost ceph-mgr[287623]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 04:48:17 localhost ceph-mgr[287623]: mgr[py] Loading python module 'balancer'
Nov 23 04:48:17 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:17.992+0000 7f6f51a90140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 04:48:18 localhost podman[287595]: 2025-11-23 09:48:18.021961553 +0000 UTC m=+0.245702463 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:48:18 localhost podman[287595]: 2025-11-23 09:48:18.030578571 +0000 UTC m=+0.254319451 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:48:18 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:48:18 localhost ceph-mgr[287623]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 04:48:18 localhost ceph-mgr[287623]: mgr[py] Loading python module 'cephadm'
Nov 23 04:48:18 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:18.061+0000 7f6f51a90140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 04:48:18 localhost ceph-mgr[287623]: mgr[py] Loading python module 'crash'
Nov 23 04:48:18 localhost ceph-mgr[287623]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 04:48:18 localhost ceph-mgr[287623]: mgr[py] Loading python module 'dashboard'
Nov 23 04:48:18 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:18.693+0000 7f6f51a90140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Loading python module 'devicehealth'
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 04:48:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:19.246+0000 7f6f51a90140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 04:48:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 04:48:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]:  from numpy import show_config as show_numpy_config
Nov 23 04:48:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:19.389+0000 7f6f51a90140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Loading python module 'influx'
Nov 23 04:48:19 localhost podman[287815]: 2025-11-23 09:48:19.441572206 +0000 UTC m=+0.105442932 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, architecture=x86_64, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:48:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:19.448+0000 7f6f51a90140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Loading python module 'insights'
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Loading python module 'iostat'
Nov 23 04:48:19 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:19.563+0000 7f6f51a90140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Loading python module 'k8sevents'
Nov 23 04:48:19 localhost podman[287815]: 2025-11-23 09:48:19.568300453 +0000 UTC m=+0.232171149 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_BRANCH=main, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7)
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Loading python module 'localpool'
Nov 23 04:48:19 localhost ceph-mgr[287623]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'mirroring'
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'nfs'
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'orchestrator'
Nov 23 04:48:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:20.299+0000 7f6f51a90140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:20.440+0000 7f6f51a90140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'osd_support'
Nov 23 04:48:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:20.503+0000 7f6f51a90140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 04:48:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:20.558+0000 7f6f51a90140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'progress'
Nov 23 04:48:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:20.622+0000 7f6f51a90140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'prometheus'
Nov 23 04:48:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:20.681+0000 7f6f51a90140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 04:48:20 localhost ceph-mgr[287623]: mgr[py] Loading python module 'rbd_support'
Nov 23 04:48:20 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:20.975+0000 7f6f51a90140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:21.053+0000 7f6f51a90140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Loading python module 'restful'
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Loading python module 'rgw'
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:21.375+0000 7f6f51a90140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Loading python module 'rook'
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:21.833+0000 7f6f51a90140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Loading python module 'selftest'
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Loading python module 'snap_schedule'
Nov 23 04:48:21 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:21.893+0000 7f6f51a90140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 04:48:21 localhost ceph-mgr[287623]: mgr[py] Loading python module 'stats'
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Loading python module 'status'
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Loading python module 'telegraf'
Nov 23 04:48:22 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:22.082+0000 7f6f51a90140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Loading python module 'telemetry'
Nov 23 04:48:22 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:22.139+0000 7f6f51a90140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 04:48:22 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:22.270+0000 7f6f51a90140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost openstack_network_exporter[242118]: ERROR   09:48:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:48:22 localhost openstack_network_exporter[242118]: ERROR   09:48:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:48:22 localhost openstack_network_exporter[242118]: ERROR   09:48:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:48:22 localhost openstack_network_exporter[242118]: ERROR   09:48:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:48:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:48:22 localhost openstack_network_exporter[242118]: ERROR   09:48:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:48:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Loading python module 'volumes'
Nov 23 04:48:22 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:22.423+0000 7f6f51a90140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Loading python module 'zabbix'
Nov 23 04:48:22 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:22.625+0000 7f6f51a90140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:48:22.686+0000 7f6f51a90140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 04:48:22 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Nov 23 04:48:22 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.103:6800/1698075890
Nov 23 04:48:25 localhost sshd[287967]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:48:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:48:28 localhost systemd[1]: tmp-crun.EOBqAz.mount: Deactivated successfully.
Nov 23 04:48:28 localhost podman[287988]: 2025-11-23 09:48:28.185270929 +0000 UTC m=+0.088929465 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:48:28 localhost podman[287987]: 2025-11-23 09:48:28.233883181 +0000 UTC m=+0.139840171 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:48:28 localhost podman[287987]: 2025-11-23 09:48:28.243077954 +0000 UTC m=+0.149034944 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:48:28 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:48:28 localhost podman[287988]: 2025-11-23 09:48:28.299603395 +0000 UTC m=+0.203261981 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:48:28 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:48:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:48:29 localhost podman[288045]: 2025-11-23 09:48:29.759794408 +0000 UTC m=+0.081096600 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:48:29 localhost podman[288045]: 2025-11-23 09:48:29.801997062 +0000 UTC m=+0.123299274 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:48:29 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:48:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:48:31 localhost podman[288462]: 2025-11-23 09:48:31.384630725 +0000 UTC m=+0.079221778 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 23 04:48:31 localhost podman[288462]: 2025-11-23 09:48:31.423048405 +0000 UTC m=+0.117639508 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Nov 23 04:48:31 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:48:33 localhost podman[288806]: 
Nov 23 04:48:33 localhost podman[288806]: 2025-11-23 09:48:33.135440699 +0000 UTC m=+0.074271602 container create 58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_engelbart, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-09-24T08:57:55, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container)
Nov 23 04:48:33 localhost systemd[1]: Started libpod-conmon-58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff.scope.
Nov 23 04:48:33 localhost systemd[1]: Started libcrun container.
Nov 23 04:48:33 localhost podman[288806]: 2025-11-23 09:48:33.106118669 +0000 UTC m=+0.044949602 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:48:33 localhost podman[288806]: 2025-11-23 09:48:33.210396578 +0000 UTC m=+0.149227481 container init 58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_engelbart, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=553, name=rhceph, RELEASE=main, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:48:33 localhost podman[288806]: 2025-11-23 09:48:33.224498747 +0000 UTC m=+0.163329650 container start 58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_engelbart, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, CEPH_POINT_RELEASE=, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public)
Nov 23 04:48:33 localhost podman[288806]: 2025-11-23 09:48:33.224997681 +0000 UTC m=+0.163828584 container attach 58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_engelbart, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, release=553, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, io.openshift.expose-services=)
Nov 23 04:48:33 localhost systemd[1]: libpod-58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff.scope: Deactivated successfully.
Nov 23 04:48:33 localhost thirsty_engelbart[288821]: 167 167
Nov 23 04:48:33 localhost podman[288806]: 2025-11-23 09:48:33.232034694 +0000 UTC m=+0.170865607 container died 58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_engelbart, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True)
Nov 23 04:48:33 localhost podman[288826]: 2025-11-23 09:48:33.342591686 +0000 UTC m=+0.095130546 container remove 58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_engelbart, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:48:33 localhost systemd[1]: libpod-conmon-58482da470d49fe799517b0cb19300a099b9857e1d397dd92bdb3167ffe5aeff.scope: Deactivated successfully.
Nov 23 04:48:33 localhost podman[288842]: 
Nov 23 04:48:33 localhost podman[288842]: 2025-11-23 09:48:33.471289038 +0000 UTC m=+0.082665153 container create f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_jones, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, ceph=True, vcs-type=git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container)
Nov 23 04:48:33 localhost systemd[1]: Started libpod-conmon-f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82.scope.
Nov 23 04:48:33 localhost systemd[1]: Started libcrun container.
Nov 23 04:48:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec9a759f4cc15e98d1757793927650043e63bf26170ec969dfe39022d517dfdf/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec9a759f4cc15e98d1757793927650043e63bf26170ec969dfe39022d517dfdf/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec9a759f4cc15e98d1757793927650043e63bf26170ec969dfe39022d517dfdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec9a759f4cc15e98d1757793927650043e63bf26170ec969dfe39022d517dfdf/merged/var/lib/ceph/mon/ceph-np0005532586 supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:33 localhost podman[288842]: 2025-11-23 09:48:33.53545746 +0000 UTC m=+0.146853255 container init f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_jones, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7)
Nov 23 04:48:33 localhost podman[288842]: 2025-11-23 09:48:33.437175557 +0000 UTC m=+0.048551692 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:48:33 localhost podman[288842]: 2025-11-23 09:48:33.54453568 +0000 UTC m=+0.155911785 container start f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_jones, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, release=553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:48:33 localhost podman[288842]: 2025-11-23 09:48:33.54527527 +0000 UTC m=+0.156651375 container attach f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_jones, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55)
Nov 23 04:48:33 localhost systemd[1]: libpod-f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82.scope: Deactivated successfully.
Nov 23 04:48:33 localhost podman[288842]: 2025-11-23 09:48:33.639499751 +0000 UTC m=+0.250875886 container died f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_jones, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, version=7, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:48:33 localhost podman[288883]: 2025-11-23 09:48:33.750749142 +0000 UTC m=+0.098056797 container remove f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_jones, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True)
Nov 23 04:48:33 localhost systemd[1]: libpod-conmon-f18997843b4064fc727b0a7534deb052d64346bba055343955efca1a46f6bc82.scope: Deactivated successfully.
Nov 23 04:48:33 localhost systemd[1]: Reloading.
Nov 23 04:48:33 localhost systemd-sysv-generator[288926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:48:33 localhost systemd-rc-local-generator[288920]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: var-lib-containers-storage-overlay-186656dde4893da7f44fdbd229da787b989dc37ef9b7d90ef1f1aa629352ffc3-merged.mount: Deactivated successfully.
Nov 23 04:48:34 localhost systemd[1]: Reloading.
Nov 23 04:48:34 localhost systemd-rc-local-generator[288964]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:48:34 localhost systemd-sysv-generator[288967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:48:34 localhost systemd[1]: Starting Ceph mon.np0005532586 for 46550e70-79cb-5f55-bf6d-1204b97e083b...
Nov 23 04:48:35 localhost podman[289025]: 
Nov 23 04:48:35 localhost podman[289025]: 2025-11-23 09:48:35.068722209 +0000 UTC m=+0.085796889 container create b3b17451ea70c808aff158ac69bfc210563088228e1e2b4a6b30b582532275f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532586, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, ceph=True, RELEASE=main)
Nov 23 04:48:35 localhost systemd[1]: tmp-crun.0B8o1g.mount: Deactivated successfully.
Nov 23 04:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d4f2605014da2e23fb9ab0c41f9cc240870e737345da101948519c4341da2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d4f2605014da2e23fb9ab0c41f9cc240870e737345da101948519c4341da2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d4f2605014da2e23fb9ab0c41f9cc240870e737345da101948519c4341da2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d4f2605014da2e23fb9ab0c41f9cc240870e737345da101948519c4341da2a/merged/var/lib/ceph/mon/ceph-np0005532586 supports timestamps until 2038 (0x7fffffff)
Nov 23 04:48:35 localhost podman[289025]: 2025-11-23 09:48:35.131226894 +0000 UTC m=+0.148301574 container init b3b17451ea70c808aff158ac69bfc210563088228e1e2b4a6b30b582532275f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532586, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, release=553, version=7, name=rhceph, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.expose-services=)
Nov 23 04:48:35 localhost podman[289025]: 2025-11-23 09:48:35.034595387 +0000 UTC m=+0.051670117 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:48:35 localhost podman[289025]: 2025-11-23 09:48:35.143608256 +0000 UTC m=+0.160682936 container start b3b17451ea70c808aff158ac69bfc210563088228e1e2b4a6b30b582532275f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532586, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.33.12, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Nov 23 04:48:35 localhost bash[289025]: b3b17451ea70c808aff158ac69bfc210563088228e1e2b4a6b30b582532275f7
Nov 23 04:48:35 localhost systemd[1]: Started Ceph mon.np0005532586 for 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 04:48:35 localhost ceph-mon[289043]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 04:48:35 localhost ceph-mon[289043]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Nov 23 04:48:35 localhost ceph-mon[289043]: pidfile_write: ignore empty --pid-file
Nov 23 04:48:35 localhost ceph-mon[289043]: load: jerasure load: lrc 
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: RocksDB version: 7.9.2
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Git sha 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: DB SUMMARY
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: DB Session ID:  HZ2TGTJV75XDPMSTGKX9
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: CURRENT file:  CURRENT
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005532586/store.db dir, Total Num: 0, files: 
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005532586/store.db: 000004.log size: 761 ; 
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                         Options.error_if_exists: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                       Options.create_if_missing: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                                     Options.env: 0x55940c8db9e0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                                Options.info_log: 0x55940d508d20
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                              Options.statistics: (nil)
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                               Options.use_fsync: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                              Options.db_log_dir: 
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                                 Options.wal_dir: 
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                    Options.write_buffer_manager: 0x55940d519540
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.unordered_write: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                               Options.row_cache: None
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                              Options.wal_filter: None
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.two_write_queues: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.wal_compression: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.atomic_flush: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.max_background_jobs: 2
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.max_background_compactions: -1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.max_subcompactions: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.max_total_wal_size: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                          Options.max_open_files: -1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:       Options.compaction_readahead_size: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Compression algorithms supported:
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: #011kZSTD supported: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: #011kXpressCompression supported: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: #011kBZip2Compression supported: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: #011kLZ4Compression supported: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: #011kZlibCompression supported: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: #011kSnappyCompression supported: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005532586/store.db/MANIFEST-000005
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:           Options.merge_operator: 
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:        Options.compaction_filter: None
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55940d508980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55940d505350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:        Options.write_buffer_size: 33554432
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:  Options.max_write_buffer_number: 2
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:          Options.compression: NoCompression
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.num_levels: 7
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                   Options.table_properties_collectors: 
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                           Options.bloom_locality: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                               Options.ttl: 2592000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                       Options.enable_blob_files: false
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                           Options.min_blob_size: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005532586/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1978a202-f4a3-46d3-80fc-ff640bbe93f1
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891315197718, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891315200163, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891315200268, "job": 1, "event": "recovery_finished"}
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55940d52ce00
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: DB pointer 0x55940d622000
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586 does not exist in monmap, will attempt to join an existing cluster
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:48:35 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.84 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.84 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55940d505350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 04:48:35 localhost ceph-mon[289043]: using public_addr v2:172.18.0.108:0/0 -> [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0]
Nov 23 04:48:35 localhost ceph-mon[289043]: starting mon.np0005532586 rank -1 at public addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] at bind addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005532586 fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(???) e0 preinit fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing) e3 sync_obtain_latest_monmap
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing) e3 sync_obtain_latest_monmap obtained monmap e3
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing).mds e16 new map
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T08:00:26.486221+0000#012modified#0112025-11-23T09:47:19.846415+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01179#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26392}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26392 members: 26392#012[mds.mds.np0005532586.mfohsb{0:26392} state up:active seq 12 addr [v2:172.18.0.108:6808/2718449296,v1:172.18.0.108:6809/2718449296] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005532585.jcltnl{-1:17133} state up:standby seq 1 addr [v2:172.18.0.107:6808/563301557,v1:172.18.0.107:6809/563301557] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005532584.aoxjmw{-1:17139} state up:standby seq 1 addr [v2:172.18.0.106:6808/2261302276,v1:172.18.0.106:6809/2261302276] compat {c=[1],r=[1],i=[17ff]}]
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing).osd e81 crush map has features 3314933000852226048, adjusting msgr requires
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing).osd e81 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth rm", "entity": "mds.mds.np0005532583.nwcrcp"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd='[{"prefix": "auth rm", "entity": "mds.mds.np0005532583.nwcrcp"}]': finished
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Removing key for mds.mds.np0005532583.nwcrcp
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mgr to host np0005532584.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mgr to host np0005532585.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mgr to host np0005532586.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Saving service mgr spec with placement label:mgr
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 23 04:48:35 localhost ceph-mon[289043]: Deploying daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 23 04:48:35 localhost ceph-mon[289043]: Deploying daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mon to host np0005532581.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label _admin to host np0005532581.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: Deploying daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mon to host np0005532582.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label _admin to host np0005532582.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mon to host np0005532583.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label _admin to host np0005532583.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mon to host np0005532584.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label _admin to host np0005532584.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mon to host np0005532585.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label _admin to host np0005532585.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label mon to host np0005532586.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Added label _admin to host np0005532586.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:35 localhost ceph-mon[289043]: Saving service mon spec with placement label:mon
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:48:35 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:35 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:48:35 localhost ceph-mon[289043]: Deploying daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:48:35 localhost ceph-mon[289043]: mon.np0005532586@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3
Nov 23 04:48:35 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Nov 23 04:48:37 localhost ceph-mon[289043]: mon.np0005532586@-1(probing) e4  my rank is now 3 (was -1)
Nov 23 04:48:37 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:48:37 localhost ceph-mon[289043]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 23 04:48:37 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:38 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Nov 23 04:48:38 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:40 localhost ceph-mon[289043]: mgrc update_daemon_metadata mon.np0005532586 metadata {addrs=[v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005532586.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005532586.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116604,os=Linux}
Nov 23 04:48:40 localhost ceph-mon[289043]: Deploying daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532581 calling monitor election
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532582 calling monitor election
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:48:40 localhost ceph-mon[289043]: mon.np0005532581 is new leader, mons np0005532581,np0005532583,np0005532582,np0005532586 in quorum (ranks 0,1,2,3)
Nov 23 04:48:40 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:48:40 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:40 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:40 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:40 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:48:41 localhost podman[240144]: time="2025-11-23T09:48:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:48:41 localhost podman[240144]: @ - - [23/Nov/2025:09:48:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:48:41 localhost podman[240144]: @ - - [23/Nov/2025:09:48:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19158 "" "Go-http-client/1.1"
Nov 23 04:48:41 localhost ceph-mon[289043]: Deploying daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:48:42 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Nov 23 04:48:42 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf080 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Nov 23 04:48:42 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:48:42 localhost ceph-mon[289043]: paxos.3).electionLogic(18) init, last seen epoch 18
Nov 23 04:48:42 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:42 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:43 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Nov 23 04:48:43 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Nov 23 04:48:43 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Nov 23 04:48:45 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Nov 23 04:48:46 localhost ceph-mds[286319]: mds.beacon.mds.np0005532586.mfohsb missed beacon ack from the monitors
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532581 calling monitor election
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532582 calling monitor election
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532581 is new leader, mons np0005532581,np0005532583,np0005532582,np0005532586,np0005532585 in quorum (ranks 0,1,2,3,4)
Nov 23 04:48:47 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:48:47 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:47 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Nov 23 04:48:47 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Nov 23 04:48:47 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:48:47 localhost ceph-mon[289043]: paxos.3).electionLogic(22) init, last seen epoch 22
Nov 23 04:48:47 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:48:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:48:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:48:48 localhost podman[289082]: 2025-11-23 09:48:48.176276881 +0000 UTC m=+0.082108667 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 04:48:48 localhost podman[289082]: 2025-11-23 09:48:48.194077992 +0000 UTC m=+0.099909808 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 04:48:48 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:48:48 localhost podman[289083]: 2025-11-23 09:48:48.283039877 +0000 UTC m=+0.185796369 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 23 04:48:48 localhost podman[289083]: 2025-11-23 09:48:48.298766612 +0000 UTC m=+0.201523094 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:48:48 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:48:48 localhost podman[289084]: 2025-11-23 09:48:48.383725486 +0000 UTC m=+0.283146966 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:48:48 localhost podman[289084]: 2025-11-23 09:48:48.391334107 +0000 UTC m=+0.290755597 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:48:48 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:48:52 localhost openstack_network_exporter[242118]: ERROR   09:48:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:48:52 localhost openstack_network_exporter[242118]: ERROR   09:48:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:48:52 localhost openstack_network_exporter[242118]: ERROR   09:48:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:48:52 localhost openstack_network_exporter[242118]: ERROR   09:48:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:48:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:48:52 localhost openstack_network_exporter[242118]: ERROR   09:48:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:48:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:48:52 localhost ceph-mon[289043]: paxos.3).electionLogic(23) init, last seen epoch 23, mid-election, bumping
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532586@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532582 calling monitor election
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532581 calling monitor election
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:48:52 localhost ceph-mon[289043]: mon.np0005532581 is new leader, mons np0005532581,np0005532583,np0005532582,np0005532586,np0005532585,np0005532584 in quorum (ranks 0,1,2,3,4,5)
Nov 23 04:48:52 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:48:52 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:54 localhost podman[289267]: 2025-11-23 09:48:54.106068788 +0000 UTC m=+0.090020485 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-type=git, io.openshift.expose-services=, ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7)
Nov 23 04:48:54 localhost podman[289267]: 2025-11-23 09:48:54.215344994 +0000 UTC m=+0.199296691 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public, release=553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main)
Nov 23 04:48:55 localhost sshd[289456]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:48:55 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:55 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:55 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:55 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:55 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:55 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:55 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:48:56 localhost ceph-mon[289043]: Updating np0005532581.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:56 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:56 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:56 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:56 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:56 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:48:57 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:57 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:57 localhost ceph-mon[289043]: Updating np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:57 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:57 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:57 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:57 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:48:58 localhost ceph-mon[289043]: Reconfiguring mon.np0005532581 (monmap changed)...
Nov 23 04:48:58 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532581 on np0005532581.localdomain
Nov 23 04:48:58 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:58 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:58 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532581.sxlgsx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:48:59 localhost podman[289796]: 2025-11-23 09:48:59.187774088 +0000 UTC m=+0.089506911 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:48:59 localhost podman[289796]: 2025-11-23 09:48:59.224289846 +0000 UTC m=+0.126022689 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:48:59 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:48:59 localhost podman[289797]: 2025-11-23 09:48:59.245753088 +0000 UTC m=+0.147782729 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:48:59 localhost podman[289797]: 2025-11-23 09:48:59.259019554 +0000 UTC m=+0.161049235 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:48:59 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:48:59 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532581.sxlgsx (monmap changed)...
Nov 23 04:48:59 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532581.sxlgsx on np0005532581.localdomain
Nov 23 04:48:59 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:59 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:48:59 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532581.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:49:00 localhost podman[289836]: 2025-11-23 09:49:00.176287802 +0000 UTC m=+0.082710224 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:49:00 localhost podman[289836]: 2025-11-23 09:49:00.220856271 +0000 UTC m=+0.127278683 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 23 04:49:00 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:49:00 localhost ceph-mon[289043]: Reconfiguring crash.np0005532581 (monmap changed)...
Nov 23 04:49:00 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532581 on np0005532581.localdomain
Nov 23 04:49:00 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:00 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:00 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:01 localhost nova_compute[281613]: 2025-11-23 09:49:01.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:01 localhost ceph-mon[289043]: Reconfiguring crash.np0005532582 (monmap changed)...
Nov 23 04:49:01 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain
Nov 23 04:49:01 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:01 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:01 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.047 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.047 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.047 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.048 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.074 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.075 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.076 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.076 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.077 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:49:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:49:02 localhost podman[289862]: 2025-11-23 09:49:02.183579096 +0000 UTC m=+0.084693289 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:49:02 localhost podman[289862]: 2025-11-23 09:49:02.226739847 +0000 UTC m=+0.127854030 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:49:02 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.542 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.769 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.772 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11989MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.772 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.772 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.845 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.845 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:49:02 localhost nova_compute[281613]: 2025-11-23 09:49:02.882 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:49:02 localhost ceph-mon[289043]: Reconfiguring mon.np0005532582 (monmap changed)...
Nov 23 04:49:02 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532582 on np0005532582.localdomain
Nov 23 04:49:02 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:02 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:02 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:03 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:49:03 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/593143704' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:49:03 localhost nova_compute[281613]: 2025-11-23 09:49:03.323 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:49:03 localhost nova_compute[281613]: 2025-11-23 09:49:03.332 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:49:03 localhost nova_compute[281613]: 2025-11-23 09:49:03.352 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:49:03 localhost nova_compute[281613]: 2025-11-23 09:49:03.355 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:49:03 localhost nova_compute[281613]: 2025-11-23 09:49:03.355 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:49:04 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532582.gilwrz (monmap changed)...
Nov 23 04:49:04 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532582.gilwrz on np0005532582.localdomain
Nov 23 04:49:04 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:04 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:04 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:04 localhost nova_compute[281613]: 2025-11-23 09:49:04.327 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:04 localhost nova_compute[281613]: 2025-11-23 09:49:04.328 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:04 localhost nova_compute[281613]: 2025-11-23 09:49:04.350 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:04 localhost nova_compute[281613]: 2025-11-23 09:49:04.351 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:04 localhost nova_compute[281613]: 2025-11-23 09:49:04.351 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:04 localhost nova_compute[281613]: 2025-11-23 09:49:04.352 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:49:05 localhost ceph-mon[289043]: Reconfiguring mon.np0005532583 (monmap changed)...
Nov 23 04:49:05 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain
Nov 23 04:49:05 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:05 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:05 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)...
Nov 23 04:49:05 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:05 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain
Nov 23 04:49:05 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:06 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:06 localhost ceph-mon[289043]: Reconfiguring crash.np0005532583 (monmap changed)...
Nov 23 04:49:06 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:06 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:49:06 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:06 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:06 localhost ceph-mon[289043]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:49:06 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:06 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:49:07 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:07 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' 
Nov 23 04:49:07 localhost ceph-mon[289043]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:49:07 localhost ceph-mon[289043]: from='mgr.14120 172.18.0.103:0/1791364452' entity='mgr.np0005532581.sxlgsx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:49:07 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:49:07 localhost ceph-mon[289043]: mon.np0005532586@3(peon).osd e81 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 23 04:49:07 localhost ceph-mon[289043]: mon.np0005532586@3(peon).osd e81 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 23 04:49:07 localhost ceph-mon[289043]: mon.np0005532586@3(peon).osd e82 e82: 6 total, 6 up, 6 in
Nov 23 04:49:07 localhost systemd[1]: session-14.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-25.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-22.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-23.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 17 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 18 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 14 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 19 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 23 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 25 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 22 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 14.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 25.
Nov 23 04:49:07 localhost systemd[1]: session-20.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-16.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 20 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd[1]: session-26.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-26.scope: Consumed 3min 36.527s CPU time.
Nov 23 04:49:07 localhost systemd[1]: session-21.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd[1]: session-24.scope: Deactivated successfully.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 16 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 21 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 26 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Session 24 logged out. Waiting for processes to exit.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 22.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 19.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 18.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 23.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 17.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 20.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 16.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 26.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 21.
Nov 23 04:49:07 localhost systemd-logind[761]: Removed session 24.
Nov 23 04:49:08 localhost sshd[289924]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:49:08 localhost ceph-mon[289043]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:49:08 localhost ceph-mon[289043]: Activating manager daemon np0005532583.orhywt
Nov 23 04:49:08 localhost ceph-mon[289043]: from='client.? 172.18.0.103:0/443540260' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:49:08 localhost ceph-mon[289043]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 23 04:49:08 localhost ceph-mon[289043]: Manager daemon np0005532583.orhywt is now available
Nov 23 04:49:08 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532583.orhywt/mirror_snapshot_schedule"} : dispatch
Nov 23 04:49:08 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532583.orhywt/mirror_snapshot_schedule"} : dispatch
Nov 23 04:49:08 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532583.orhywt/trash_purge_schedule"} : dispatch
Nov 23 04:49:08 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532583.orhywt/trash_purge_schedule"} : dispatch
Nov 23 04:49:08 localhost systemd-logind[761]: New session 64 of user ceph-admin.
Nov 23 04:49:08 localhost systemd[1]: Started Session 64 of User ceph-admin.
Nov 23 04:49:09 localhost systemd[1]: tmp-crun.nkrtLc.mount: Deactivated successfully.
Nov 23 04:49:09 localhost podman[290035]: 2025-11-23 09:49:09.243239378 +0000 UTC m=+0.103242040 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git)
Nov 23 04:49:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:49:09.254 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 04:49:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:49:09.255 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 04:49:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:49:09.256 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:49:09 localhost podman[290035]: 2025-11-23 09:49:09.360096094 +0000 UTC m=+0.220098736 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.component=rhceph-container, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Nov 23 04:49:09 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:09 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:49:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:49:10 localhost ceph-mon[289043]: mon.np0005532586@3(peon).osd e82 _set_new_cache_sizes cache_size:1019613614 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:10 localhost ceph-mon[289043]: [23/Nov/2025:09:49:09] ENGINE Bus STARTING
Nov 23 04:49:10 localhost ceph-mon[289043]: [23/Nov/2025:09:49:09] ENGINE Serving on http://172.18.0.105:8765
Nov 23 04:49:10 localhost ceph-mon[289043]: [23/Nov/2025:09:49:09] ENGINE Serving on https://172.18.0.105:7150
Nov 23 04:49:10 localhost ceph-mon[289043]: [23/Nov/2025:09:49:09] ENGINE Bus STARTED
Nov 23 04:49:10 localhost ceph-mon[289043]: [23/Nov/2025:09:49:09] ENGINE Client ('172.18.0.105', 46326) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:10 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:11 localhost podman[240144]: time="2025-11-23T09:49:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:49:11 localhost podman[240144]: @ - - [23/Nov/2025:09:49:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:49:11 localhost podman[240144]: @ - - [23/Nov/2025:09:49:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19169 "" "Go-http-client/1.1"
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532581", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532581", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 04:49:12 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:49:12 localhost ceph-mon[289043]: Updating np0005532581.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:12 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:12 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:12 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:12 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:12 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532581.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:49:13 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:14 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:15 localhost ceph-mon[289043]: mon.np0005532586@3(peon).osd e82 _set_new_cache_sizes cache_size:1020043752 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:15 localhost ceph-mon[289043]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:49:15 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:49:15 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:49:16 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:16 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:16 localhost ceph-mon[289043]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:49:16 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:49:17 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:49:17 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:17 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:17 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:49:17 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:49:17 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:49:17 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:49:18 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:18 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:18 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:18 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:49:18 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:18 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:18 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:49:18 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:18 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:18 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:49:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:49:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:49:19 localhost podman[290938]: 2025-11-23 09:49:19.196236882 +0000 UTC m=+0.096665068 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 04:49:19 localhost systemd[1]: tmp-crun.LvPfsY.mount: Deactivated successfully.
Nov 23 04:49:19 localhost podman[290939]: 2025-11-23 09:49:19.290635648 +0000 UTC m=+0.188378791 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:49:19 localhost podman[290937]: 2025-11-23 09:49:19.247371443 +0000 UTC m=+0.147916914 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, version=9.6)
Nov 23 04:49:19 localhost podman[290938]: 2025-11-23 09:49:19.310485985 +0000 UTC m=+0.210914131 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:49:19 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:49:19 localhost podman[290937]: 2025-11-23 09:49:19.332090972 +0000 UTC m=+0.232636473 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 23 04:49:19 localhost podman[290939]: 2025-11-23 09:49:19.350166191 +0000 UTC m=+0.247909324 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:49:19 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:49:19 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:49:20 localhost ceph-mon[289043]: mon.np0005532586@3(peon).osd e82 _set_new_cache_sizes cache_size:1020054457 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:20 localhost ceph-mon[289043]: Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:49:20 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:49:20 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:20 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:20 localhost ceph-mon[289043]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:49:20 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:20 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:20 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:49:20 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:20 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:20 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:49:21 localhost ceph-mon[289043]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:49:21 localhost ceph-mon[289043]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:49:21 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:21 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:21 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:49:22 localhost openstack_network_exporter[242118]: ERROR   09:49:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:49:22 localhost openstack_network_exporter[242118]: ERROR   09:49:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:49:22 localhost openstack_network_exporter[242118]: ERROR   09:49:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:49:22 localhost openstack_network_exporter[242118]: ERROR   09:49:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:49:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:49:22 localhost openstack_network_exporter[242118]: ERROR   09:49:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:49:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:49:22 localhost ceph-mon[289043]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:49:22 localhost ceph-mon[289043]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:49:23 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:23 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:23 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:49:23 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:49:23 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:49:23 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:49:23 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:23 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:23 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:23 localhost ceph-mon[289043]: from='mgr.14190 ' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:24 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532586@3(peon) e7  my rank is now 2 (was 3)
Nov 23 04:49:24 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Nov 23 04:49:24 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Nov 23 04:49:24 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:49:24 localhost ceph-mon[289043]: paxos.2).electionLogic(26) init, last seen epoch 26
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:49:24 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf600 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:49:24 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:49:24 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:49:24 localhost ceph-mon[289043]: Remove daemons mon.np0005532581
Nov 23 04:49:24 localhost ceph-mon[289043]: Safe to remove mon.np0005532581: new quorum should be ['np0005532583', 'np0005532582', 'np0005532586', 'np0005532585', 'np0005532584'] (from ['np0005532583', 'np0005532582', 'np0005532586', 'np0005532585', 'np0005532584'])
Nov 23 04:49:24 localhost ceph-mon[289043]: Removing monitor np0005532581 from monmap...
Nov 23 04:49:24 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "mon rm", "name": "np0005532581"} : dispatch
Nov 23 04:49:24 localhost ceph-mon[289043]: Removing daemon mon.np0005532581 from np0005532581.localdomain -- ports []
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532582 calling monitor election
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:49:24 localhost ceph-mon[289043]: mon.np0005532583 is new leader, mons np0005532583,np0005532582,np0005532586,np0005532585,np0005532584 in quorum (ranks 0,1,2,3,4)
Nov 23 04:49:24 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:49:24 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:24 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:24 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:25 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054725 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:25 localhost ceph-mon[289043]: Reconfiguring mon.np0005532585 (monmap changed)...
Nov 23 04:49:25 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:49:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:26 localhost podman[291052]: 
Nov 23 04:49:26 localhost podman[291052]: 2025-11-23 09:49:26.097725009 +0000 UTC m=+0.076774901 container create d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hoover, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, release=553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:49:26 localhost systemd[1]: Started libpod-conmon-d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02.scope.
Nov 23 04:49:26 localhost systemd[1]: Started libcrun container.
Nov 23 04:49:26 localhost podman[291052]: 2025-11-23 09:49:26.066654281 +0000 UTC m=+0.045704193 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:49:26 localhost podman[291052]: 2025-11-23 09:49:26.16915048 +0000 UTC m=+0.148200372 container init d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hoover, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.expose-services=, release=553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:49:26 localhost podman[291052]: 2025-11-23 09:49:26.179195387 +0000 UTC m=+0.158245279 container start d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hoover, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, CEPH_POINT_RELEASE=, release=553)
Nov 23 04:49:26 localhost podman[291052]: 2025-11-23 09:49:26.179716792 +0000 UTC m=+0.158766684 container attach d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hoover, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_BRANCH=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:49:26 localhost kind_hoover[291066]: 167 167
Nov 23 04:49:26 localhost systemd[1]: libpod-d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02.scope: Deactivated successfully.
Nov 23 04:49:26 localhost podman[291052]: 2025-11-23 09:49:26.183484645 +0000 UTC m=+0.162534557 container died d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hoover, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, release=553, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True)
Nov 23 04:49:26 localhost podman[291071]: 2025-11-23 09:49:26.262055924 +0000 UTC m=+0.067227787 container remove d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_hoover, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, name=rhceph, release=553, version=7, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, architecture=x86_64, distribution-scope=public)
Nov 23 04:49:26 localhost systemd[1]: libpod-conmon-d109feb0043e5afebb3ae8af5c2ce988abdaa16ec4f516c89a88c7d68e289c02.scope: Deactivated successfully.
Nov 23 04:49:26 localhost ceph-mon[289043]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:49:26 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:49:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:49:26 localhost podman[291141]: 
Nov 23 04:49:26 localhost podman[291141]: 2025-11-23 09:49:26.97889981 +0000 UTC m=+0.076433371 container create 3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_bell, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12)
Nov 23 04:49:27 localhost systemd[1]: Started libpod-conmon-3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6.scope.
Nov 23 04:49:27 localhost systemd[1]: Started libcrun container.
Nov 23 04:49:27 localhost podman[291141]: 2025-11-23 09:49:27.044748758 +0000 UTC m=+0.142282289 container init 3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_bell, io.buildah.version=1.33.12, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main)
Nov 23 04:49:27 localhost podman[291141]: 2025-11-23 09:49:26.947896264 +0000 UTC m=+0.045429835 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:49:27 localhost podman[291141]: 2025-11-23 09:49:27.054883357 +0000 UTC m=+0.152416878 container start 3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_bell, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:49:27 localhost podman[291141]: 2025-11-23 09:49:27.055158934 +0000 UTC m=+0.152692495 container attach 3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_bell, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public)
Nov 23 04:49:27 localhost hardcore_bell[291156]: 167 167
Nov 23 04:49:27 localhost systemd[1]: libpod-3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6.scope: Deactivated successfully.
Nov 23 04:49:27 localhost podman[291141]: 2025-11-23 09:49:27.058792125 +0000 UTC m=+0.156325656 container died 3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_bell, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, release=553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12)
Nov 23 04:49:27 localhost systemd[1]: var-lib-containers-storage-overlay-102b61c1b9542a9ffeaeabba93c48dd57b62ecd8d0649f65befff2e98a872156-merged.mount: Deactivated successfully.
Nov 23 04:49:27 localhost systemd[1]: var-lib-containers-storage-overlay-b3f134465a2a35f06f1adc7f97502b0c02d99ee9f9dbc92a5d6d5901fc863963-merged.mount: Deactivated successfully.
Nov 23 04:49:27 localhost podman[291161]: 2025-11-23 09:49:27.154565358 +0000 UTC m=+0.086743705 container remove 3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_bell, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-type=git, release=553, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.33.12, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:49:27 localhost systemd[1]: libpod-conmon-3810f4b0ddb19a6efe114c31140bd8fc6ce9737790ce690292370778802611c6.scope: Deactivated successfully.
Nov 23 04:49:27 localhost ceph-mon[289043]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:49:27 localhost ceph-mon[289043]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:49:27 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:27 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:27 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:49:27 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:27 localhost podman[291237]: 
Nov 23 04:49:27 localhost podman[291237]: 2025-11-23 09:49:27.937272061 +0000 UTC m=+0.072977425 container create fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_roentgen, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, build-date=2025-09-24T08:57:55, release=553, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, description=Red Hat Ceph Storage 7)
Nov 23 04:49:27 localhost systemd[1]: Started libpod-conmon-fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c.scope.
Nov 23 04:49:27 localhost systemd[1]: Started libcrun container.
Nov 23 04:49:28 localhost podman[291237]: 2025-11-23 09:49:28.001412472 +0000 UTC m=+0.137117866 container init fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_roentgen, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-type=git, release=553, RELEASE=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True)
Nov 23 04:49:28 localhost podman[291237]: 2025-11-23 09:49:28.008053626 +0000 UTC m=+0.143758990 container start fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_roentgen, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, vcs-type=git, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, version=7)
Nov 23 04:49:28 localhost podman[291237]: 2025-11-23 09:49:27.908810756 +0000 UTC m=+0.044516150 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:49:28 localhost podman[291237]: 2025-11-23 09:49:28.008372454 +0000 UTC m=+0.144077818 container attach fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_roentgen, io.openshift.tags=rhceph ceph, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_BRANCH=main, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, name=rhceph)
Nov 23 04:49:28 localhost confident_roentgen[291252]: 167 167
Nov 23 04:49:28 localhost systemd[1]: libpod-fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c.scope: Deactivated successfully.
Nov 23 04:49:28 localhost podman[291237]: 2025-11-23 09:49:28.009937857 +0000 UTC m=+0.145643231 container died fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_roentgen, name=rhceph, io.openshift.expose-services=, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:49:28 localhost podman[291257]: 2025-11-23 09:49:28.096475946 +0000 UTC m=+0.077728657 container remove fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_roentgen, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Nov 23 04:49:28 localhost systemd[1]: var-lib-containers-storage-overlay-e9c3c685eee01d65be0feca554cc257b0128e20c74cae7c012ffb6ec84a4be6e-merged.mount: Deactivated successfully.
Nov 23 04:49:28 localhost systemd[1]: libpod-conmon-fd80cb094a65fc6bdf65592186ef1226b396cf057fd1e35c8e10056b3ebde60c.scope: Deactivated successfully.
Nov 23 04:49:28 localhost ceph-mon[289043]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:49:28 localhost ceph-mon[289043]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:49:28 localhost ceph-mon[289043]: Removed label mon from host np0005532581.localdomain
Nov 23 04:49:28 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:28 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:28 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:49:28 localhost podman[291334]: 
Nov 23 04:49:28 localhost podman[291334]: 2025-11-23 09:49:28.876916327 +0000 UTC m=+0.065041607 container create 00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_jones, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, version=7, io.buildah.version=1.33.12, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Nov 23 04:49:28 localhost systemd[1]: Started libpod-conmon-00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779.scope.
Nov 23 04:49:28 localhost systemd[1]: Started libcrun container.
Nov 23 04:49:28 localhost podman[291334]: 2025-11-23 09:49:28.937027746 +0000 UTC m=+0.125153026 container init 00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_jones, description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, name=rhceph, version=7, GIT_CLEAN=True, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public)
Nov 23 04:49:28 localhost podman[291334]: 2025-11-23 09:49:28.94478619 +0000 UTC m=+0.132911460 container start 00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_jones, description=Red Hat Ceph Storage 7, release=553, version=7, ceph=True, io.openshift.expose-services=, vcs-type=git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True)
Nov 23 04:49:28 localhost podman[291334]: 2025-11-23 09:49:28.944999956 +0000 UTC m=+0.133125236 container attach 00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_jones, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-type=git)
Nov 23 04:49:28 localhost focused_jones[291349]: 167 167
Nov 23 04:49:28 localhost systemd[1]: libpod-00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779.scope: Deactivated successfully.
Nov 23 04:49:28 localhost podman[291334]: 2025-11-23 09:49:28.947486065 +0000 UTC m=+0.135611395 container died 00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_jones, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, name=rhceph, distribution-scope=public, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:49:28 localhost podman[291334]: 2025-11-23 09:49:28.855516476 +0000 UTC m=+0.043641776 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:49:29 localhost podman[291354]: 2025-11-23 09:49:29.031451972 +0000 UTC m=+0.072852042 container remove 00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_jones, GIT_BRANCH=main, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64)
Nov 23 04:49:29 localhost systemd[1]: libpod-conmon-00504191719aea122d29de63cf3f24ba05851c893cf162f1add80ee5c9ba5779.scope: Deactivated successfully.
Nov 23 04:49:29 localhost systemd[1]: var-lib-containers-storage-overlay-f7f5f84c85f83a17ca7b9b312cecde5ea7ddb6c66c086f2c861c3931bd3b8039-merged.mount: Deactivated successfully.
Nov 23 04:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:49:29 localhost podman[291408]: 2025-11-23 09:49:29.380779624 +0000 UTC m=+0.087261200 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 04:49:29 localhost podman[291408]: 2025-11-23 09:49:29.412129669 +0000 UTC m=+0.118611275 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 04:49:29 localhost systemd[1]: tmp-crun.UENozz.mount: Deactivated successfully.
Nov 23 04:49:29 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:49:29 localhost podman[291409]: 2025-11-23 09:49:29.434218589 +0000 UTC m=+0.137518677 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:49:29 localhost podman[291409]: 2025-11-23 09:49:29.445869431 +0000 UTC m=+0.149169539 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:49:29 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:49:29 localhost podman[291466]: 
Nov 23 04:49:29 localhost podman[291466]: 2025-11-23 09:49:29.707240004 +0000 UTC m=+0.072644126 container create d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_poincare, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, vcs-type=git, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, version=7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7)
Nov 23 04:49:29 localhost systemd[1]: Started libpod-conmon-d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26.scope.
Nov 23 04:49:29 localhost systemd[1]: Started libcrun container.
Nov 23 04:49:29 localhost podman[291466]: 2025-11-23 09:49:29.77412341 +0000 UTC m=+0.139527532 container init d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_poincare, build-date=2025-09-24T08:57:55, ceph=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, release=553, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., version=7)
Nov 23 04:49:29 localhost podman[291466]: 2025-11-23 09:49:29.678051499 +0000 UTC m=+0.043455681 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:49:29 localhost podman[291466]: 2025-11-23 09:49:29.785039732 +0000 UTC m=+0.150443864 container start d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_poincare, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, distribution-scope=public, io.openshift.expose-services=, release=553, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:49:29 localhost podman[291466]: 2025-11-23 09:49:29.785380931 +0000 UTC m=+0.150785093 container attach d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_poincare, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.openshift.tags=rhceph ceph)
Nov 23 04:49:29 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:49:29 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:49:29 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:29 localhost ceph-mon[289043]: Removed label mgr from host np0005532581.localdomain
Nov 23 04:49:29 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:29 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:29 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:29 localhost friendly_poincare[291481]: 167 167
Nov 23 04:49:29 localhost systemd[1]: libpod-d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26.scope: Deactivated successfully.
Nov 23 04:49:29 localhost podman[291466]: 2025-11-23 09:49:29.789627069 +0000 UTC m=+0.155031361 container died d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_poincare, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph)
Nov 23 04:49:29 localhost podman[291486]: 2025-11-23 09:49:29.884508317 +0000 UTC m=+0.082992331 container remove d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_poincare, ceph=True, io.openshift.expose-services=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main)
Nov 23 04:49:29 localhost systemd[1]: libpod-conmon-d3c6af50db211b55355fba044efbfa069ae28c72fc2aecf7ea34036361061a26.scope: Deactivated successfully.
Nov 23 04:49:30 localhost systemd[1]: var-lib-containers-storage-overlay-c7609b0129b2cbc1bd0cf2895b85b389fdc39fd8989551d2bfc722f759c73e96-merged.mount: Deactivated successfully.
Nov 23 04:49:30 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:49:30 localhost podman[291553]: 2025-11-23 09:49:30.586443261 +0000 UTC m=+0.085555393 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 23 04:49:30 localhost podman[291561]: 
Nov 23 04:49:30 localhost podman[291561]: 2025-11-23 09:49:30.613768696 +0000 UTC m=+0.089681597 container create d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_knuth, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:49:30 localhost podman[291553]: 2025-11-23 09:49:30.659015734 +0000 UTC m=+0.158127836 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 23 04:49:30 localhost systemd[1]: Started libpod-conmon-d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21.scope.
Nov 23 04:49:30 localhost podman[291561]: 2025-11-23 09:49:30.575358615 +0000 UTC m=+0.051271596 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:49:30 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:49:30 localhost systemd[1]: Started libcrun container.
Nov 23 04:49:30 localhost podman[291561]: 2025-11-23 09:49:30.692421656 +0000 UTC m=+0.168334557 container init d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_knuth, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=553, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, version=7, io.buildah.version=1.33.12)
Nov 23 04:49:30 localhost podman[291561]: 2025-11-23 09:49:30.702110703 +0000 UTC m=+0.178023614 container start d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_knuth, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, release=553, name=rhceph, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:49:30 localhost podman[291561]: 2025-11-23 09:49:30.702379621 +0000 UTC m=+0.178292552 container attach d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_knuth, build-date=2025-09-24T08:57:55, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, GIT_BRANCH=main, release=553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc., architecture=x86_64, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=)
Nov 23 04:49:30 localhost infallible_knuth[291593]: 167 167
Nov 23 04:49:30 localhost systemd[1]: libpod-d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21.scope: Deactivated successfully.
Nov 23 04:49:30 localhost podman[291561]: 2025-11-23 09:49:30.705770295 +0000 UTC m=+0.181683216 container died d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_knuth, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, architecture=x86_64, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:49:30 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)...
Nov 23 04:49:30 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:49:30 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:30 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:30 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:30 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:30 localhost podman[291599]: 2025-11-23 09:49:30.808171951 +0000 UTC m=+0.089649946 container remove d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_knuth, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:49:30 localhost systemd[1]: libpod-conmon-d29ce4f08fb3413077113790c984e66f199379b8a42fa2fc4218ce2a1d42dd21.scope: Deactivated successfully.
Nov 23 04:49:31 localhost systemd[1]: var-lib-containers-storage-overlay-143a63993e81cc5aff10808f96d4a8a0dec1396758ec40a490ae8b6d16853beb-merged.mount: Deactivated successfully.
Nov 23 04:49:31 localhost ceph-mon[289043]: Reconfiguring mon.np0005532586 (monmap changed)...
Nov 23 04:49:31 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:49:31 localhost ceph-mon[289043]: Removed label _admin from host np0005532581.localdomain
Nov 23 04:49:31 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:31 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:49:32 localhost podman[291635]: 2025-11-23 09:49:32.622602241 +0000 UTC m=+0.076802771 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 23 04:49:32 localhost podman[291635]: 2025-11-23 09:49:32.635338982 +0000 UTC m=+0.089539572 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:49:32 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:49:32 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:32 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:32 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:49:32 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:32 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:33.927734) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891373927814, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11993, "num_deletes": 513, "total_data_size": 17675839, "memory_usage": 18490736, "flush_reason": "Manual Compaction"}
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 23 04:49:33 localhost ceph-mon[289043]: Removing np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:33 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:33 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:33 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:33 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:33 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:49:33 localhost ceph-mon[289043]: Removing np0005532581.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:49:33 localhost ceph-mon[289043]: Removing np0005532581.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891373979210, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12135211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11998, "table_properties": {"data_size": 12078828, "index_size": 30101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25221, "raw_key_size": 263964, "raw_average_key_size": 26, "raw_value_size": 11907456, "raw_average_value_size": 1180, "num_data_blocks": 1146, "num_entries": 10083, "num_filter_entries": 10083, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 1763891315, "file_creation_time": 1763891373, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 51548 microseconds, and 26061 cpu microseconds.
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:33.979278) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12135211 bytes OK
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:33.979307) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:33.981465) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:33.981488) EVENT_LOG_v1 {"time_micros": 1763891373981482, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:33.981506) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 17597375, prev total WAL file size 17622086, number of live WAL files 2.
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:33.984473) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1887B)]
Nov 23 04:49:33 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891373984610, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12137098, "oldest_snapshot_seqno": -1}
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9574 keys, 12127386 bytes, temperature: kUnknown
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891374044334, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12127386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12072331, "index_size": 30058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 255548, "raw_average_key_size": 26, "raw_value_size": 11907514, "raw_average_value_size": 1243, "num_data_blocks": 1144, "num_entries": 9574, "num_filter_entries": 9574, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 0, "file_creation_time": 1763891373, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:34.044725) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12127386 bytes
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:34.046281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.8 rd, 202.6 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.6, 0.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10088, records dropped: 514 output_compression: NoCompression
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:34.046311) EVENT_LOG_v1 {"time_micros": 1763891374046297, "job": 4, "event": "compaction_finished", "compaction_time_micros": 59844, "compaction_time_cpu_micros": 33936, "output_level": 6, "num_output_files": 1, "total_output_size": 12127386, "num_input_records": 10088, "num_output_records": 9574, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891374048190, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891374048245, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 23 04:49:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:33.984394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:49:34 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:34 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:34 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:34 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:34 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:49:34 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:34 localhost ceph-mon[289043]: Removing daemon mgr.np0005532581.sxlgsx from np0005532581.localdomain -- ports [9283, 8765]
Nov 23 04:49:35 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:37 localhost ceph-mon[289043]: Removing key for mgr.np0005532581.sxlgsx
Nov 23 04:49:37 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth rm", "entity": "mgr.np0005532581.sxlgsx"} : dispatch
Nov 23 04:49:37 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005532581.sxlgsx"}]': finished
Nov 23 04:49:37 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:37 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:38 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:38 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:38 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:38 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:49:38 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:39 localhost ceph-mon[289043]: Reconfiguring crash.np0005532581 (monmap changed)...
Nov 23 04:49:39 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532581.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:39 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532581 on np0005532581.localdomain
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.784205) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379784258, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 462, "num_deletes": 256, "total_data_size": 303488, "memory_usage": 313488, "flush_reason": "Manual Compaction"}
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379790918, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 194587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12003, "largest_seqno": 12460, "table_properties": {"data_size": 191999, "index_size": 571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7023, "raw_average_key_size": 19, "raw_value_size": 186389, "raw_average_value_size": 514, "num_data_blocks": 26, "num_entries": 362, "num_filter_entries": 362, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891373, "oldest_key_time": 1763891373, "file_creation_time": 1763891379, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6763 microseconds, and 1289 cpu microseconds.
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.790967) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 194587 bytes OK
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.790989) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.792711) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.792729) EVENT_LOG_v1 {"time_micros": 1763891379792723, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.792748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 300514, prev total WAL file size 300838, number of live WAL files 2.
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.793558) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353231' seq:72057594037927935, type:22 .. '6C6F676D0033373734' seq:0, type:0; will stop at (end)
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(190KB)], [15(11MB)]
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379793643, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12321973, "oldest_snapshot_seqno": -1}
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9402 keys, 12213720 bytes, temperature: kUnknown
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379866966, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 12213720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12158980, "index_size": 30127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 253123, "raw_average_key_size": 26, "raw_value_size": 11996460, "raw_average_value_size": 1275, "num_data_blocks": 1145, "num_entries": 9402, "num_filter_entries": 9402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 0, "file_creation_time": 1763891379, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.867308) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12213720 bytes
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.869261) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.9 rd, 166.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.6 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(126.1) write-amplify(62.8) OK, records in: 9936, records dropped: 534 output_compression: NoCompression
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.869293) EVENT_LOG_v1 {"time_micros": 1763891379869279, "job": 6, "event": "compaction_finished", "compaction_time_micros": 73395, "compaction_time_cpu_micros": 35840, "output_level": 6, "num_output_files": 1, "total_output_size": 12213720, "num_input_records": 9936, "num_output_records": 9402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379869452, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891379871076, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.793389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.871133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.871140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.871144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.871147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:49:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:49:39.871150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:49:40 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:40 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:40 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:40 localhost ceph-mon[289043]: Reconfiguring crash.np0005532582 (monmap changed)...
Nov 23 04:49:40 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:40 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain
Nov 23 04:49:40 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:40 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:41 localhost podman[240144]: time="2025-11-23T09:49:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:49:41 localhost podman[240144]: @ - - [23/Nov/2025:09:49:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:49:41 localhost podman[240144]: @ - - [23/Nov/2025:09:49:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19175 "" "Go-http-client/1.1"
Nov 23 04:49:41 localhost ceph-mon[289043]: Reconfiguring mon.np0005532582 (monmap changed)...
Nov 23 04:49:41 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:41 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532582 on np0005532582.localdomain
Nov 23 04:49:41 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:41 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:41 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:42 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532582.gilwrz (monmap changed)...
Nov 23 04:49:42 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532582.gilwrz on np0005532582.localdomain
Nov 23 04:49:42 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:42 localhost ceph-mon[289043]: Added label _no_schedule to host np0005532581.localdomain
Nov 23 04:49:42 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:42 localhost ceph-mon[289043]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532581.localdomain
Nov 23 04:49:42 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:42 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:42 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:43 localhost ceph-mon[289043]: Reconfiguring mon.np0005532583 (monmap changed)...
Nov 23 04:49:43 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain
Nov 23 04:49:43 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:43 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:43 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:43 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:44 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)...
Nov 23 04:49:44 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain
Nov 23 04:49:44 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:44 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:44 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:44 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:44 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain"} : dispatch
Nov 23 04:49:44 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain"}]': finished
Nov 23 04:49:45 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:45 localhost sshd[291992]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:49:45 localhost ceph-mon[289043]: Reconfiguring crash.np0005532583 (monmap changed)...
Nov 23 04:49:45 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:49:45 localhost ceph-mon[289043]: Removed host np0005532581.localdomain
Nov 23 04:49:45 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:45 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:45 localhost ceph-mon[289043]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:49:45 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:45 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:49:45 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:45 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:45 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:49:46 localhost ceph-mon[289043]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:49:46 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:49:46 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:46 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:46 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:49:47 localhost ceph-mon[289043]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:49:47 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:49:47 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:47 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:47 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:49:48 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:49:48 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:49:48 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:48 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:48 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:49 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:49:49 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:49:49 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:49 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:49 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:49:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:49:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:49:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:49:50 localhost sshd[292020]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:49:50 localhost podman[291996]: 2025-11-23 09:49:50.189462269 +0000 UTC m=+0.089131592 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:49:50 localhost podman[291996]: 2025-11-23 09:49:50.227940046 +0000 UTC m=+0.127609399 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:49:50 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:49:50 localhost systemd[1]: tmp-crun.qe7MGi.mount: Deactivated successfully.
Nov 23 04:49:50 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:49:50 localhost podman[291995]: 2025-11-23 09:49:50.247231172 +0000 UTC m=+0.148280603 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:49:50 localhost podman[291994]: 2025-11-23 09:49:50.291305959 +0000 UTC m=+0.192735970 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git)
Nov 23 04:49:50 localhost podman[291995]: 2025-11-23 09:49:50.315837195 +0000 UTC m=+0.216886596 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 23 04:49:50 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:49:50 localhost podman[291994]: 2025-11-23 09:49:50.33247863 +0000 UTC m=+0.233908631 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 23 04:49:50 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:49:50 localhost ceph-mon[289043]: Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:49:50 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:49:50 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:50 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:50 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:49:51 localhost systemd[1]: tmp-crun.iYy4yQ.mount: Deactivated successfully.
Nov 23 04:49:51 localhost ceph-mon[289043]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:49:51 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:49:51 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:51 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:51 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:49:52 localhost openstack_network_exporter[242118]: ERROR   09:49:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:49:52 localhost openstack_network_exporter[242118]: ERROR   09:49:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:49:52 localhost openstack_network_exporter[242118]: ERROR   09:49:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:49:52 localhost openstack_network_exporter[242118]: ERROR   09:49:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:49:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:49:52 localhost openstack_network_exporter[242118]: ERROR   09:49:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:49:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:49:52 localhost ceph-mon[289043]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:49:52 localhost ceph-mon[289043]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:49:52 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:52 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:52 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:49:52 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:53 localhost ceph-mon[289043]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:49:53 localhost ceph-mon[289043]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:49:53 localhost ceph-mon[289043]: Saving service mon spec with placement label:mon
Nov 23 04:49:53 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:53 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:53 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:49:54 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:49:54 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:49:54 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:54 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:54 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:49:54 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:49:55 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf080 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Nov 23 04:49:55 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:49:55 localhost ceph-mon[289043]: paxos.2).electionLogic(28) init, last seen epoch 28
Nov 23 04:49:55 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:49:55 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:50:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:00 localhost podman[292076]: 2025-11-23 09:50:00.17941987 +0000 UTC m=+0.083698457 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:50:00 localhost podman[292076]: 2025-11-23 09:50:00.185148073 +0000 UTC m=+0.089426700 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 23 04:50:00 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:50:00 localhost ceph-mon[289043]: Remove daemons mon.np0005532584
Nov 23 04:50:00 localhost ceph-mon[289043]: Safe to remove mon.np0005532584: new quorum should be ['np0005532583', 'np0005532582', 'np0005532586', 'np0005532585'] (from ['np0005532583', 'np0005532582', 'np0005532586', 'np0005532585'])
Nov 23 04:50:00 localhost ceph-mon[289043]: Removing monitor np0005532584 from monmap...
Nov 23 04:50:00 localhost ceph-mon[289043]: Removing daemon mon.np0005532584 from np0005532584.localdomain -- ports []
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532582 calling monitor election
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:50:00 localhost ceph-mon[289043]: Reconfiguring crash.np0005532582 (monmap changed)...
Nov 23 04:50:00 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532583 is new leader, mons np0005532583,np0005532586,np0005532585 in quorum (ranks 0,2,3)
Nov 23 04:50:00 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532583 is new leader, mons np0005532583,np0005532582,np0005532586,np0005532585 in quorum (ranks 0,1,2,3)
Nov 23 04:50:00 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:50:00 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:00 localhost podman[292077]: 2025-11-23 09:50:00.230190096 +0000 UTC m=+0.132630455 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:50:00 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:00 localhost podman[292077]: 2025-11-23 09:50:00.244914959 +0000 UTC m=+0.147355358 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:50:00 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:50:01 localhost podman[292118]: 2025-11-23 09:50:01.170763234 +0000 UTC m=+0.078425946 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 04:50:01 localhost podman[292118]: 2025-11-23 09:50:01.210901926 +0000 UTC m=+0.118564628 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 23 04:50:01 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:50:01 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain
Nov 23 04:50:01 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:01 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:01 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.045 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.045 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.046 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.046 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.047 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:50:02 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532582.gilwrz (monmap changed)...
Nov 23 04:50:02 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532582.gilwrz on np0005532582.localdomain
Nov 23 04:50:02 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:02 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:02 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:02 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:50:02 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3149325585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.507 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.703 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.705 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12026MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.705 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.706 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.797 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.798 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:50:02 localhost nova_compute[281613]: 2025-11-23 09:50:02.831 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:50:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:50:03 localhost systemd[1]: tmp-crun.afHS6Y.mount: Deactivated successfully.
Nov 23 04:50:03 localhost podman[292185]: 2025-11-23 09:50:03.186711709 +0000 UTC m=+0.094877125 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 04:50:03 localhost podman[292185]: 2025-11-23 09:50:03.195952616 +0000 UTC m=+0.104118052 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Nov 23 04:50:03 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:50:03 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:50:03 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1371552442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:50:03 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)...
Nov 23 04:50:03 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain
Nov 23 04:50:03 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:03 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:03 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:03 localhost nova_compute[281613]: 2025-11-23 09:50:03.271 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:50:03 localhost nova_compute[281613]: 2025-11-23 09:50:03.278 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:50:03 localhost nova_compute[281613]: 2025-11-23 09:50:03.293 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:50:03 localhost nova_compute[281613]: 2025-11-23 09:50:03.296 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:50:03 localhost nova_compute[281613]: 2025-11-23 09:50:03.296 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:50:04 localhost ceph-mon[289043]: Reconfiguring crash.np0005532583 (monmap changed)...
Nov 23 04:50:04 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:50:04 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:04 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:04 localhost ceph-mon[289043]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:50:04 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:04 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:50:04 localhost nova_compute[281613]: 2025-11-23 09:50:04.296 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:04 localhost nova_compute[281613]: 2025-11-23 09:50:04.297 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:50:04 localhost nova_compute[281613]: 2025-11-23 09:50:04.297 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:50:04 localhost nova_compute[281613]: 2025-11-23 09:50:04.313 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:50:04 localhost nova_compute[281613]: 2025-11-23 09:50:04.313 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:04 localhost nova_compute[281613]: 2025-11-23 09:50:04.313 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:05 localhost nova_compute[281613]: 2025-11-23 09:50:05.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:05 localhost nova_compute[281613]: 2025-11-23 09:50:05.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:05 localhost nova_compute[281613]: 2025-11-23 09:50:05.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:50:05 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:05 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:05 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:05 localhost ceph-mon[289043]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:50:05 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:50:05 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:50:06 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:06 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:06 localhost ceph-mon[289043]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:50:06 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:50:06 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:50:07 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:07 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:07 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:50:07 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:07 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:50:07 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:07 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:07 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:08 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:50:08 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:50:08 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:08 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:50:08 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:08 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:08 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:50:09.255 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:50:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:50:09.255 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:50:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:50:09.256 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:50:09 localhost ceph-mon[289043]: Deploying daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:50:09 localhost ceph-mon[289043]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:50:09 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:50:09 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:09 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:09 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:50:10 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:10 localhost ceph-mon[289043]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:50:10 localhost ceph-mon[289043]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:50:10 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:10 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:10 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:50:11 localhost podman[240144]: time="2025-11-23T09:50:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:50:11 localhost podman[240144]: @ - - [23/Nov/2025:09:50:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:50:11 localhost podman[240144]: @ - - [23/Nov/2025:09:50:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19166 "" "Go-http-client/1.1"
Nov 23 04:50:11 localhost ceph-mon[289043]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:50:11 localhost ceph-mon[289043]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:50:11 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:11 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:11 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:12 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Nov 23 04:50:12 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Nov 23 04:50:12 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Nov 23 04:50:12 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf600 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Nov 23 04:50:12 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:50:12 localhost ceph-mon[289043]: paxos.2).electionLogic(34) init, last seen epoch 34
Nov 23 04:50:12 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:12 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:17 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: paxos.2).electionLogic(37) init, last seen epoch 37, mid-election, bumping
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532586@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:50:17 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:50:17 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532582 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532583 is new leader, mons np0005532583,np0005532582,np0005532586 in quorum (ranks 0,1,2)
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532582 calling monitor election
Nov 23 04:50:17 localhost ceph-mon[289043]: mon.np0005532583 is new leader, mons np0005532583,np0005532582,np0005532586,np0005532585,np0005532584 in quorum (ranks 0,1,2,3,4)
Nov 23 04:50:17 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:50:17 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:17 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:17 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:18 localhost podman[292261]: 
Nov 23 04:50:18 localhost podman[292261]: 2025-11-23 09:50:18.226044835 +0000 UTC m=+0.079302460 container create b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_heisenberg, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, ceph=True, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:50:18 localhost systemd[1]: Started libpod-conmon-b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94.scope.
Nov 23 04:50:18 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:18 localhost podman[292261]: 2025-11-23 09:50:18.194880842 +0000 UTC m=+0.048138487 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:50:18 localhost podman[292261]: 2025-11-23 09:50:18.292377916 +0000 UTC m=+0.145635541 container init b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_heisenberg, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, version=7, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:50:18 localhost podman[292261]: 2025-11-23 09:50:18.308408725 +0000 UTC m=+0.161666350 container start b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_heisenberg, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:50:18 localhost podman[292261]: 2025-11-23 09:50:18.308691772 +0000 UTC m=+0.161949467 container attach b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_heisenberg, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.33.12, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:50:18 localhost distracted_heisenberg[292276]: 167 167
Nov 23 04:50:18 localhost systemd[1]: libpod-b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94.scope: Deactivated successfully.
Nov 23 04:50:18 localhost podman[292261]: 2025-11-23 09:50:18.314443806 +0000 UTC m=+0.167701441 container died b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_heisenberg, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True)
Nov 23 04:50:18 localhost podman[292281]: 2025-11-23 09:50:18.409398723 +0000 UTC m=+0.080530093 container remove b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_heisenberg, release=553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:50:18 localhost systemd[1]: libpod-conmon-b7cbd94c038b0face1773a651e7c84680a176a9ecb8ebea275ff648ddde68f94.scope: Deactivated successfully.
Nov 23 04:50:18 localhost ceph-mon[289043]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:50:18 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:50:19 localhost systemd[1]: var-lib-containers-storage-overlay-bace86b68b3f4e82fb36c01475a89c7fb1731f8794b909867ebbaf2598d5c028-merged.mount: Deactivated successfully.
Nov 23 04:50:19 localhost podman[292349]: 
Nov 23 04:50:19 localhost podman[292349]: 2025-11-23 09:50:19.380836385 +0000 UTC m=+0.069055806 container create 50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_williams, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, release=553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 23 04:50:19 localhost systemd[1]: Started libpod-conmon-50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0.scope.
Nov 23 04:50:19 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:19 localhost podman[292349]: 2025-11-23 09:50:19.437961251 +0000 UTC m=+0.126180672 container init 50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_williams, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=553, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:50:19 localhost podman[292349]: 2025-11-23 09:50:19.448126822 +0000 UTC m=+0.136346273 container start 50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_williams, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, release=553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:50:19 localhost podman[292349]: 2025-11-23 09:50:19.448437041 +0000 UTC m=+0.136656462 container attach 50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_williams, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=553, version=7, name=rhceph, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, RELEASE=main)
Nov 23 04:50:19 localhost busy_williams[292365]: 167 167
Nov 23 04:50:19 localhost podman[292349]: 2025-11-23 09:50:19.356421852 +0000 UTC m=+0.044641273 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:50:19 localhost systemd[1]: libpod-50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0.scope: Deactivated successfully.
Nov 23 04:50:19 localhost podman[292349]: 2025-11-23 09:50:19.460489842 +0000 UTC m=+0.148709293 container died 50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_williams, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.expose-services=, RELEASE=main, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:50:19 localhost podman[292370]: 2025-11-23 09:50:19.550474486 +0000 UTC m=+0.077523451 container remove 50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_williams, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., name=rhceph, version=7, GIT_BRANCH=main, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:50:19 localhost systemd[1]: libpod-conmon-50fbb7cbd12ec3270a68687d26e0e0cc5ee2ae860b8ebb83c2602473ae38d1e0.scope: Deactivated successfully.
Nov 23 04:50:19 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:19 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:19 localhost ceph-mon[289043]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:50:19 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:50:19 localhost ceph-mon[289043]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:50:19 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:19 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:20 localhost systemd[1]: var-lib-containers-storage-overlay-f466b498e6d2ad34145639005020bc4b593a6d37e1503678423acde638512e17-merged.mount: Deactivated successfully.
Nov 23 04:50:20 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:50:20 localhost podman[292444]: 2025-11-23 09:50:20.400274149 +0000 UTC m=+0.088488705 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:50:20 localhost podman[292452]: 
Nov 23 04:50:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:50:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:50:20 localhost podman[292444]: 2025-11-23 09:50:20.421395323 +0000 UTC m=+0.109609889 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:50:20 localhost podman[292452]: 2025-11-23 09:50:20.423718075 +0000 UTC m=+0.086878272 container create ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_mendeleev, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=553, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:50:20 localhost systemd[1]: Started libpod-conmon-ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3.scope.
Nov 23 04:50:20 localhost podman[292452]: 2025-11-23 09:50:20.386013458 +0000 UTC m=+0.049173695 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:50:20 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:20 localhost podman[292452]: 2025-11-23 09:50:20.505022857 +0000 UTC m=+0.168183034 container init ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_mendeleev, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, ceph=True, architecture=x86_64, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:50:20 localhost podman[292483]: 2025-11-23 09:50:20.509329233 +0000 UTC m=+0.092124393 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:50:20 localhost podman[292452]: 2025-11-23 09:50:20.515419205 +0000 UTC m=+0.178579402 container start ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_mendeleev, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, release=553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Nov 23 04:50:20 localhost podman[292452]: 2025-11-23 09:50:20.515923928 +0000 UTC m=+0.179084125 container attach ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_mendeleev, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.tags=rhceph ceph)
Nov 23 04:50:20 localhost relaxed_mendeleev[292507]: 167 167
Nov 23 04:50:20 localhost systemd[1]: libpod-ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3.scope: Deactivated successfully.
Nov 23 04:50:20 localhost podman[292452]: 2025-11-23 09:50:20.518723393 +0000 UTC m=+0.181883590 container died ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_mendeleev, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, release=553)
Nov 23 04:50:20 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:50:20 localhost podman[292483]: 2025-11-23 09:50:20.549760512 +0000 UTC m=+0.132555672 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.6, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Nov 23 04:50:20 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:50:20 localhost podman[292519]: 2025-11-23 09:50:20.616321771 +0000 UTC m=+0.084628542 container remove ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_mendeleev, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.buildah.version=1.33.12, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55)
Nov 23 04:50:20 localhost systemd[1]: libpod-conmon-ac88dfcd1ba79c03320ae3a9937370dc2ca33bed0edb36d77dc68d8832c90aa3.scope: Deactivated successfully.
Nov 23 04:50:20 localhost podman[292484]: 2025-11-23 09:50:20.665376341 +0000 UTC m=+0.244485382 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:50:20 localhost podman[292484]: 2025-11-23 09:50:20.701325411 +0000 UTC m=+0.280434452 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:50:20 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:50:20 localhost ceph-mon[289043]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:50:20 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:50:20 localhost ceph-mon[289043]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:50:21 localhost systemd[1]: var-lib-containers-storage-overlay-d4ff24fddc26dec1b04bfcafc9efdda517e106c79ab6195a070a81873172c288-merged.mount: Deactivated successfully.
Nov 23 04:50:21 localhost podman[292602]: 
Nov 23 04:50:21 localhost podman[292602]: 2025-11-23 09:50:21.443488438 +0000 UTC m=+0.078734504 container create ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_buck, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, release=553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:50:21 localhost systemd[1]: Started libpod-conmon-ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd.scope.
Nov 23 04:50:21 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:21 localhost podman[292602]: 2025-11-23 09:50:21.505180176 +0000 UTC m=+0.140426232 container init ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_buck, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, distribution-scope=public, io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:50:21 localhost podman[292602]: 2025-11-23 09:50:21.412147861 +0000 UTC m=+0.047393937 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:50:21 localhost podman[292602]: 2025-11-23 09:50:21.515174754 +0000 UTC m=+0.150420810 container start ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_buck, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7)
Nov 23 04:50:21 localhost podman[292602]: 2025-11-23 09:50:21.515967765 +0000 UTC m=+0.151213871 container attach ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_buck, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., vcs-type=git)
Nov 23 04:50:21 localhost peaceful_buck[292617]: 167 167
Nov 23 04:50:21 localhost systemd[1]: libpod-ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd.scope: Deactivated successfully.
Nov 23 04:50:21 localhost podman[292602]: 2025-11-23 09:50:21.519713324 +0000 UTC m=+0.154959420 container died ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_buck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55)
Nov 23 04:50:21 localhost podman[292623]: 2025-11-23 09:50:21.609706549 +0000 UTC m=+0.083666046 container remove ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_buck, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, ceph=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, release=553, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git)
Nov 23 04:50:21 localhost systemd[1]: libpod-conmon-ce31eb38af9c5d334447526c9750ac9997ab40e3dd8c311740f6bed83a6b15bd.scope: Deactivated successfully.
Nov 23 04:50:21 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:21 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:21 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:50:21 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:21 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:50:21 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:21 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:21 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:22 localhost systemd[1]: var-lib-containers-storage-overlay-eb4a83abdb1146db95abf381a9e4c73b4e8e74afb0aa20fbb79355a5c1ca6689-merged.mount: Deactivated successfully.
Nov 23 04:50:22 localhost openstack_network_exporter[242118]: ERROR   09:50:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:50:22 localhost openstack_network_exporter[242118]: ERROR   09:50:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:50:22 localhost openstack_network_exporter[242118]: ERROR   09:50:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:50:22 localhost podman[292692]: 
Nov 23 04:50:22 localhost podman[292692]: 2025-11-23 09:50:22.308199159 +0000 UTC m=+0.077416169 container create 79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williams, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, architecture=x86_64, version=7, io.openshift.expose-services=, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Nov 23 04:50:22 localhost openstack_network_exporter[242118]: ERROR   09:50:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:50:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:50:22 localhost openstack_network_exporter[242118]: ERROR   09:50:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:50:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:50:22 localhost systemd[1]: Started libpod-conmon-79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753.scope.
Nov 23 04:50:22 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:22 localhost podman[292692]: 2025-11-23 09:50:22.374784388 +0000 UTC m=+0.144001398 container init 79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williams, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=)
Nov 23 04:50:22 localhost podman[292692]: 2025-11-23 09:50:22.276980275 +0000 UTC m=+0.046197335 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:50:22 localhost systemd[1]: tmp-crun.A83GO2.mount: Deactivated successfully.
Nov 23 04:50:22 localhost podman[292692]: 2025-11-23 09:50:22.390952449 +0000 UTC m=+0.160169489 container start 79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williams, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:50:22 localhost podman[292692]: 2025-11-23 09:50:22.391434963 +0000 UTC m=+0.160651973 container attach 79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williams, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, architecture=x86_64)
Nov 23 04:50:22 localhost angry_williams[292707]: 167 167
Nov 23 04:50:22 localhost systemd[1]: libpod-79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753.scope: Deactivated successfully.
Nov 23 04:50:22 localhost podman[292692]: 2025-11-23 09:50:22.395701947 +0000 UTC m=+0.164919017 container died 79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williams, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, vcs-type=git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, distribution-scope=public)
Nov 23 04:50:22 localhost podman[292712]: 2025-11-23 09:50:22.476494535 +0000 UTC m=+0.071425410 container remove 79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_williams, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, version=7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=553, distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:50:22 localhost systemd[1]: libpod-conmon-79e3937e59cf76f98fb5b5b092c3fd27a008f4eb1c04c04fd37cc6f584a36753.scope: Deactivated successfully.
Nov 23 04:50:22 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)...
Nov 23 04:50:22 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:50:22 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:22 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:23 localhost systemd[1]: var-lib-containers-storage-overlay-e31fd9324af454944558139a34c1a7e1ac345ae8a6ddfd3bd94ccaf716d3ad53-merged.mount: Deactivated successfully.
Nov 23 04:50:24 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:24 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:25 localhost ceph-mon[289043]: Reconfig service osd.default_drive_group
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:25 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e83 e83: 6 total, 6 up, 6 in
Nov 23 04:50:26 localhost systemd[1]: session-64.scope: Deactivated successfully.
Nov 23 04:50:26 localhost systemd[1]: session-64.scope: Consumed 18.123s CPU time.
Nov 23 04:50:26 localhost systemd-logind[761]: Session 64 logged out. Waiting for processes to exit.
Nov 23 04:50:26 localhost systemd-logind[761]: Removed session 64.
Nov 23 04:50:26 localhost sshd[293115]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:50:26 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[289043]: from='client.? 172.18.0.200:0/3357125401' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[289043]: Activating manager daemon np0005532582.gilwrz
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14190 172.18.0.105:0/1209981642' entity='mgr.np0005532583.orhywt' 
Nov 23 04:50:26 localhost ceph-mon[289043]: Manager daemon np0005532582.gilwrz is now available
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"}]': finished
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"} : dispatch
Nov 23 04:50:26 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532581.localdomain.devices.0"}]': finished
Nov 23 04:50:27 localhost sshd[293117]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:50:27 localhost systemd-logind[761]: New session 65 of user ceph-admin.
Nov 23 04:50:27 localhost systemd[1]: Started Session 65 of User ceph-admin.
Nov 23 04:50:27 localhost ceph-mon[289043]: removing stray HostCache host record np0005532581.localdomain.devices.0
Nov 23 04:50:27 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532582.gilwrz/mirror_snapshot_schedule"} : dispatch
Nov 23 04:50:27 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532582.gilwrz/mirror_snapshot_schedule"} : dispatch
Nov 23 04:50:27 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532582.gilwrz/trash_purge_schedule"} : dispatch
Nov 23 04:50:27 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532582.gilwrz/trash_purge_schedule"} : dispatch
Nov 23 04:50:28 localhost podman[293230]: 2025-11-23 09:50:28.225566881 +0000 UTC m=+0.095980505 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64)
Nov 23 04:50:28 localhost podman[293230]: 2025-11-23 09:50:28.341420936 +0000 UTC m=+0.211834510 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, build-date=2025-09-24T08:57:55, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:50:28 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:28 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:28 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:28 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:28 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:29 localhost ceph-mon[289043]: [23/Nov/2025:09:50:28] ENGINE Bus STARTING
Nov 23 04:50:29 localhost ceph-mon[289043]: [23/Nov/2025:09:50:28] ENGINE Serving on https://172.18.0.104:7150
Nov 23 04:50:29 localhost ceph-mon[289043]: [23/Nov/2025:09:50:28] ENGINE Client ('172.18.0.104', 33588) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 04:50:29 localhost ceph-mon[289043]: [23/Nov/2025:09:50:28] ENGINE Serving on http://172.18.0.104:8765
Nov 23 04:50:29 localhost ceph-mon[289043]: [23/Nov/2025:09:50:28] ENGINE Bus STARTED
Nov 23 04:50:29 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:29 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:29 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:29 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:29 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:30 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:50:30 localhost podman[293489]: 2025-11-23 09:50:30.860576585 +0000 UTC m=+0.074811570 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 23 04:50:30 localhost podman[293489]: 2025-11-23 09:50:30.895640671 +0000 UTC m=+0.109875736 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:50:30 localhost systemd[1]: tmp-crun.NKfXZr.mount: Deactivated successfully.
Nov 23 04:50:30 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:50:30 localhost podman[293490]: 2025-11-23 09:50:30.922418557 +0000 UTC m=+0.133313723 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:50:30 localhost podman[293490]: 2025-11-23 09:50:30.931612292 +0000 UTC m=+0.142507458 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:50:30 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 04:50:31 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 04:50:31 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:50:31 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:50:31 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:31 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:31 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:31 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:31 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:50:31 localhost podman[293636]: 2025-11-23 09:50:31.347630177 +0000 UTC m=+0.082750362 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Nov 23 04:50:31 localhost podman[293636]: 2025-11-23 09:50:31.412012987 +0000 UTC m=+0.147133132 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:50:31 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:50:32 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:32 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:32 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:32 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:32 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:32 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:33 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:33 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:33 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:50:33 localhost podman[294139]: 2025-11-23 09:50:33.392845965 +0000 UTC m=+0.085643529 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:50:33 localhost podman[294139]: 2025-11-23 09:50:33.430823419 +0000 UTC m=+0.123620953 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:50:33 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:34 localhost ceph-mon[289043]: Reconfiguring crash.np0005532582 (monmap changed)...
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:34 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:34 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.861930) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434862025, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2653, "num_deletes": 255, "total_data_size": 8170835, "memory_usage": 8738272, "flush_reason": "Manual Compaction"}
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434891885, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 4911183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12465, "largest_seqno": 15113, "table_properties": {"data_size": 4900044, "index_size": 6876, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 28832, "raw_average_key_size": 22, "raw_value_size": 4875651, "raw_average_value_size": 3854, "num_data_blocks": 297, "num_entries": 1265, "num_filter_entries": 1265, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891379, "oldest_key_time": 1763891379, "file_creation_time": 1763891434, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 30010 microseconds, and 10572 cpu microseconds.
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.891949) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 4911183 bytes OK
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.891977) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.893589) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.893610) EVENT_LOG_v1 {"time_micros": 1763891434893604, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.893636) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 8157961, prev total WAL file size 8157961, number of live WAL files 2.
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.895380) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4796KB)], [18(11MB)]
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434895467, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 17124903, "oldest_snapshot_seqno": -1}
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10115 keys, 15126579 bytes, temperature: kUnknown
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434979671, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 15126579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15066269, "index_size": 33905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 270427, "raw_average_key_size": 26, "raw_value_size": 14890471, "raw_average_value_size": 1472, "num_data_blocks": 1306, "num_entries": 10115, "num_filter_entries": 10115, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 0, "file_creation_time": 1763891434, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.980062) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 15126579 bytes
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.981602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.1 rd, 179.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.7, 11.6 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 10667, records dropped: 552 output_compression: NoCompression
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.981635) EVENT_LOG_v1 {"time_micros": 1763891434981619, "job": 8, "event": "compaction_finished", "compaction_time_micros": 84333, "compaction_time_cpu_micros": 41825, "output_level": 6, "num_output_files": 1, "total_output_size": 15126579, "num_input_records": 10667, "num_output_records": 10115, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434982521, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891434984437, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.895251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.984473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.984479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.984482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.984485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:34 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:34.984488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:35 localhost ceph-mon[289043]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Nov 23 04:50:35 localhost ceph-mon[289043]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Nov 23 04:50:35 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:35 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:35 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532582.gilwrz (monmap changed)...
Nov 23 04:50:35 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:35 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532582.gilwrz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:35 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532582.gilwrz on np0005532582.localdomain
Nov 23 04:50:35 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:36 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:36 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:36 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)...
Nov 23 04:50:36 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:36 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:36 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain
Nov 23 04:50:36 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:36 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:36 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:36 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:38 localhost ceph-mon[289043]: Reconfiguring crash.np0005532583 (monmap changed)...
Nov 23 04:50:38 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:50:38 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:38 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:38 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:38 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:38 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:39 localhost ceph-mon[289043]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:50:39 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:50:39 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:39 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:39 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.847853) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439847915, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 459, "num_deletes": 254, "total_data_size": 475331, "memory_usage": 485704, "flush_reason": "Manual Compaction"}
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439852212, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 290260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15118, "largest_seqno": 15572, "table_properties": {"data_size": 287465, "index_size": 842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6400, "raw_average_key_size": 18, "raw_value_size": 281734, "raw_average_value_size": 795, "num_data_blocks": 33, "num_entries": 354, "num_filter_entries": 354, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891435, "oldest_key_time": 1763891435, "file_creation_time": 1763891439, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 4403 microseconds, and 2053 cpu microseconds.
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.852262) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 290260 bytes OK
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.852286) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.854031) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.854051) EVENT_LOG_v1 {"time_micros": 1763891439854045, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.854074) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 472392, prev total WAL file size 472392, number of live WAL files 2.
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.854685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303130' seq:72057594037927935, type:22 .. '6B760031323635' seq:0, type:0; will stop at (end)
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(283KB)], [21(14MB)]
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439854739, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 15416839, "oldest_snapshot_seqno": -1}
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 9940 keys, 14425171 bytes, temperature: kUnknown
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439940839, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14425171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14367045, "index_size": 32169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24901, "raw_key_size": 268396, "raw_average_key_size": 27, "raw_value_size": 14195199, "raw_average_value_size": 1428, "num_data_blocks": 1215, "num_entries": 9940, "num_filter_entries": 9940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 0, "file_creation_time": 1763891439, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.941176) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14425171 bytes
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.942791) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.8 rd, 167.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.4 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(102.8) write-amplify(49.7) OK, records in: 10469, records dropped: 529 output_compression: NoCompression
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.942836) EVENT_LOG_v1 {"time_micros": 1763891439942817, "job": 10, "event": "compaction_finished", "compaction_time_micros": 86234, "compaction_time_cpu_micros": 32273, "output_level": 6, "num_output_files": 1, "total_output_size": 14425171, "num_input_records": 10469, "num_output_records": 9940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439943057, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891439945328, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.854606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.945426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.945434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.945437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.945440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:50:39.945443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:50:40 localhost ceph-mon[289043]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:50:40 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:50:40 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:40 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:40 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:40 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:40 localhost ceph-mon[289043]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:50:40 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:50:40 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:50:40 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:41 localhost podman[240144]: time="2025-11-23T09:50:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:50:41 localhost podman[240144]: @ - - [23/Nov/2025:09:50:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:50:41 localhost podman[240144]: @ - - [23/Nov/2025:09:50:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19172 "" "Go-http-client/1.1"
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:41 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:41 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:41 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:42 localhost ceph-mon[289043]: Saving service mon spec with placement label:mon
Nov 23 04:50:42 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:50:42 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:50:42 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:42 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:42 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:50:43 localhost ceph-mon[289043]: Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:50:43 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:50:43 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:43 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:43 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:43 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:50:44 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e9 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 23 04:50:44 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/94382026' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Nov 23 04:50:44 localhost ceph-mon[289043]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:50:44 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:50:44 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:44 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:44 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:50:45 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:45 localhost ceph-mon[289043]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:50:45 localhost ceph-mon[289043]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:50:45 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:45 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:45 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:45 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:45 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:50:46 localhost ceph-mon[289043]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:50:46 localhost ceph-mon[289043]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:50:46 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:46 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:46 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:46 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:46 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:46 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:50:47 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:50:47 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:50:47 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:47 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:47 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:47 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:50:48 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:50:48 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:50:48 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:48 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:48 localhost ceph-mon[289043]: Reconfiguring mon.np0005532585 (monmap changed)...
Nov 23 04:50:48 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:50:48 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:50:48 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:48 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:48 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:50:49 localhost podman[294266]: 
Nov 23 04:50:49 localhost podman[294266]: 2025-11-23 09:50:49.472175655 +0000 UTC m=+0.078983849 container create bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_ride, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=553, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.33.12, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container)
Nov 23 04:50:49 localhost systemd[1]: Started libpod-conmon-bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8.scope.
Nov 23 04:50:49 localhost podman[294266]: 2025-11-23 09:50:49.437438294 +0000 UTC m=+0.044246478 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:50:49 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:49 localhost podman[294266]: 2025-11-23 09:50:49.556771145 +0000 UTC m=+0.163579329 container init bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_ride, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, release=553, version=7, architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:50:49 localhost podman[294266]: 2025-11-23 09:50:49.569738642 +0000 UTC m=+0.176546826 container start bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_ride, ceph=True, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:50:49 localhost podman[294266]: 2025-11-23 09:50:49.570008619 +0000 UTC m=+0.176816853 container attach bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_ride, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55)
Nov 23 04:50:49 localhost nostalgic_ride[294281]: 167 167
Nov 23 04:50:49 localhost systemd[1]: libpod-bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8.scope: Deactivated successfully.
Nov 23 04:50:49 localhost podman[294266]: 2025-11-23 09:50:49.573223696 +0000 UTC m=+0.180031930 container died bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_ride, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git)
Nov 23 04:50:49 localhost podman[294287]: 2025-11-23 09:50:49.671010688 +0000 UTC m=+0.085200365 container remove bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_ride, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:50:49 localhost systemd[1]: libpod-conmon-bb4954eeb6361adba6c5c67ad7453c6cef3602527445e3a1199f2209510a89a8.scope: Deactivated successfully.
Nov 23 04:50:49 localhost ceph-mon[289043]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:50:50 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:50 localhost systemd[1]: var-lib-containers-storage-overlay-be2b1b9b6f982a627f12ca42aa0a9bbb17396792a8c1f993dca0b7ca23524509-merged.mount: Deactivated successfully.
Nov 23 04:50:50 localhost podman[294362]: 
Nov 23 04:50:50 localhost podman[294362]: 2025-11-23 09:50:50.625961853 +0000 UTC m=+0.073744949 container create ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kirch, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12)
Nov 23 04:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:50:50 localhost systemd[1]: Started libpod-conmon-ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18.scope.
Nov 23 04:50:50 localhost systemd[1]: Started libcrun container.
Nov 23 04:50:50 localhost podman[294362]: 2025-11-23 09:50:50.595603999 +0000 UTC m=+0.043387115 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:50:50 localhost podman[294362]: 2025-11-23 09:50:50.70712749 +0000 UTC m=+0.154910586 container init ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kirch, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=)
Nov 23 04:50:50 localhost kind_kirch[294379]: 167 167
Nov 23 04:50:50 localhost podman[294362]: 2025-11-23 09:50:50.719515702 +0000 UTC m=+0.167298808 container start ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kirch, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, GIT_BRANCH=main, release=553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph)
Nov 23 04:50:50 localhost systemd[1]: libpod-ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18.scope: Deactivated successfully.
Nov 23 04:50:50 localhost podman[294362]: 2025-11-23 09:50:50.720058547 +0000 UTC m=+0.167841643 container attach ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kirch, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, release=553, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vendor=Red Hat, Inc., vcs-type=git, version=7)
Nov 23 04:50:50 localhost podman[294362]: 2025-11-23 09:50:50.723579811 +0000 UTC m=+0.171362937 container died ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kirch, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, release=553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public)
Nov 23 04:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:50:50 localhost podman[294378]: 2025-11-23 09:50:50.810147413 +0000 UTC m=+0.142287838 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:50:50 localhost podman[294378]: 2025-11-23 09:50:50.842404619 +0000 UTC m=+0.174545084 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:50:50 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:50:50 localhost podman[294409]: 2025-11-23 09:50:50.863060532 +0000 UTC m=+0.119147287 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, config_id=edpm)
Nov 23 04:50:50 localhost podman[294409]: 2025-11-23 09:50:50.871474698 +0000 UTC m=+0.127561503 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:50:50 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:50:50 localhost podman[294377]: 2025-11-23 09:50:50.782577733 +0000 UTC m=+0.114613345 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, version=9.6, managed_by=edpm_ansible, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, maintainer=Red Hat, Inc.)
Nov 23 04:50:50 localhost podman[294377]: 2025-11-23 09:50:50.913236308 +0000 UTC m=+0.245271870 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-type=git)
Nov 23 04:50:50 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:50:50 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:50 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:50 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:50 localhost ceph-mon[289043]: from='mgr.14196 ' entity='mgr.np0005532582.gilwrz' 
Nov 23 04:50:50 localhost ceph-mon[289043]: from='mgr.14196 172.18.0.104:0/3739487529' entity='mgr.np0005532582.gilwrz' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:50:50 localhost podman[294403]: 2025-11-23 09:50:50.976599558 +0000 UTC m=+0.247153831 container remove ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_kirch, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:50:50 localhost systemd[1]: libpod-conmon-ea087847f8c661dbe8b44570d291ae91496f787e7e21906305bf16712e825e18.scope: Deactivated successfully.
Nov 23 04:50:51 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e84 e84: 6 total, 6 up, 6 in
Nov 23 04:50:51 localhost systemd-logind[761]: Session 65 logged out. Waiting for processes to exit.
Nov 23 04:50:51 localhost systemd[1]: session-65.scope: Deactivated successfully.
Nov 23 04:50:51 localhost systemd[1]: session-65.scope: Consumed 7.556s CPU time.
Nov 23 04:50:51 localhost systemd-logind[761]: Removed session 65.
Nov 23 04:50:51 localhost sshd[294468]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:50:51 localhost systemd-logind[761]: New session 66 of user ceph-admin.
Nov 23 04:50:51 localhost systemd[1]: Started Session 66 of User ceph-admin.
Nov 23 04:50:51 localhost systemd[1]: var-lib-containers-storage-overlay-a2d334246953194c71e408a3c21bc92e64575c5706f98fe98d9335fbd68d7a4b-merged.mount: Deactivated successfully.
Nov 23 04:50:51 localhost ceph-mon[289043]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:50:51 localhost ceph-mon[289043]: from='client.? 172.18.0.200:0/3363667457' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:50:51 localhost ceph-mon[289043]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:50:51 localhost ceph-mon[289043]: Activating manager daemon np0005532584.naxwxy
Nov 23 04:50:51 localhost ceph-mon[289043]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 23 04:50:51 localhost ceph-mon[289043]: Manager daemon np0005532584.naxwxy is now available
Nov 23 04:50:51 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/mirror_snapshot_schedule"} : dispatch
Nov 23 04:50:51 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/trash_purge_schedule"} : dispatch
Nov 23 04:50:52 localhost openstack_network_exporter[242118]: ERROR   09:50:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:50:52 localhost openstack_network_exporter[242118]: ERROR   09:50:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:50:52 localhost openstack_network_exporter[242118]: ERROR   09:50:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:50:52 localhost openstack_network_exporter[242118]: ERROR   09:50:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:50:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:50:52 localhost openstack_network_exporter[242118]: ERROR   09:50:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:50:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:50:52 localhost systemd[1]: tmp-crun.myXfYK.mount: Deactivated successfully.
Nov 23 04:50:52 localhost podman[294577]: 2025-11-23 09:50:52.511370134 +0000 UTC m=+0.103058195 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:50:52 localhost podman[294577]: 2025-11-23 09:50:52.624128629 +0000 UTC m=+0.215816680 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:50:54 localhost ceph-mon[289043]: [23/Nov/2025:09:50:52] ENGINE Bus STARTING
Nov 23 04:50:54 localhost ceph-mon[289043]: [23/Nov/2025:09:50:52] ENGINE Serving on https://172.18.0.106:7150
Nov 23 04:50:54 localhost ceph-mon[289043]: [23/Nov/2025:09:50:52] ENGINE Client ('172.18.0.106', 58208) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 04:50:54 localhost ceph-mon[289043]: [23/Nov/2025:09:50:52] ENGINE Serving on http://172.18.0.106:8765
Nov 23 04:50:54 localhost ceph-mon[289043]: [23/Nov/2025:09:50:52] ENGINE Bus STARTED
Nov 23 04:50:54 localhost ceph-mon[289043]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Nov 23 04:50:54 localhost ceph-mon[289043]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Nov 23 04:50:54 localhost ceph-mon[289043]: Cluster is now healthy
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd/host:np0005532582", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 04:50:55 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 04:50:55 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:50:55 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:50:55 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 04:50:55 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:50:55 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:55 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:55 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:55 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:55 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:50:56 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:56 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:56 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:56 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:56 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:50:58 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:50:58 localhost ceph-mon[289043]: Reconfiguring mon.np0005532582 (monmap changed)...
Nov 23 04:50:58 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532582 on np0005532582.localdomain
Nov 23 04:51:00 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:00 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:00 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:51:00 localhost ceph-mon[289043]: Reconfiguring mon.np0005532583 (monmap changed)...
Nov 23 04:51:00 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain
Nov 23 04:51:00 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:00 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:00 localhost podman[295526]: 
Nov 23 04:51:00 localhost podman[295526]: 2025-11-23 09:51:00.659513506 +0000 UTC m=+0.072515556 container create ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_wu, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=553, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:51:00 localhost systemd[1]: Started libpod-conmon-ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12.scope.
Nov 23 04:51:00 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:00 localhost podman[295526]: 2025-11-23 09:51:00.726962295 +0000 UTC m=+0.139964335 container init ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_wu, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., ceph=True, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Nov 23 04:51:00 localhost podman[295526]: 2025-11-23 09:51:00.631449253 +0000 UTC m=+0.044451333 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:00 localhost podman[295526]: 2025-11-23 09:51:00.73721348 +0000 UTC m=+0.150215530 container start ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_wu, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.33.12, vendor=Red Hat, Inc., release=553, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7)
Nov 23 04:51:00 localhost podman[295526]: 2025-11-23 09:51:00.737499548 +0000 UTC m=+0.150501638 container attach ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_wu, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, version=7, GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public)
Nov 23 04:51:00 localhost agitated_wu[295541]: 167 167
Nov 23 04:51:00 localhost systemd[1]: libpod-ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12.scope: Deactivated successfully.
Nov 23 04:51:00 localhost podman[295526]: 2025-11-23 09:51:00.739913973 +0000 UTC m=+0.152916043 container died ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_wu, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True)
Nov 23 04:51:00 localhost podman[295546]: 2025-11-23 09:51:00.832123706 +0000 UTC m=+0.084373465 container remove ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_wu, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, release=553, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, distribution-scope=public)
Nov 23 04:51:00 localhost systemd[1]: libpod-conmon-ce3d3a30b76e36c68f1b71f76fdda8d6e30b1bd7cb6a3b88ad43582ac70d8e12.scope: Deactivated successfully.
Nov 23 04:51:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:51:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:51:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:51:01 localhost ceph-mon[289043]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:51:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:51:01 localhost podman[295569]: 2025-11-23 09:51:01.186625934 +0000 UTC m=+0.089841001 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 23 04:51:01 localhost podman[295569]: 2025-11-23 09:51:01.217946384 +0000 UTC m=+0.121161401 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 23 04:51:01 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:51:01 localhost podman[295572]: 2025-11-23 09:51:01.28973139 +0000 UTC m=+0.193144772 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:51:01 localhost podman[295572]: 2025-11-23 09:51:01.297791326 +0000 UTC m=+0.201204738 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:51:01 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:51:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:51:01 localhost systemd[1]: tmp-crun.nmQZP6.mount: Deactivated successfully.
Nov 23 04:51:01 localhost systemd[1]: var-lib-containers-storage-overlay-b99e4bd0ddb3a9bec1a24684fc3c96aeb7788e3a192c7f0b9604b1ee75cde96a-merged.mount: Deactivated successfully.
Nov 23 04:51:01 localhost podman[295659]: 2025-11-23 09:51:01.673647857 +0000 UTC m=+0.088911526 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:51:01 localhost podman[295667]: 
Nov 23 04:51:01 localhost podman[295659]: 2025-11-23 09:51:01.74009521 +0000 UTC m=+0.155358849 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 04:51:01 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:51:01 localhost podman[295667]: 2025-11-23 09:51:01.796376249 +0000 UTC m=+0.189404522 container create c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_ptolemy, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:51:01 localhost podman[295667]: 2025-11-23 09:51:01.714805461 +0000 UTC m=+0.107833754 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:01 localhost systemd[1]: Started libpod-conmon-c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114.scope.
Nov 23 04:51:01 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:01 localhost podman[295667]: 2025-11-23 09:51:01.86503693 +0000 UTC m=+0.258065203 container init c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_ptolemy, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, com.redhat.component=rhceph-container)
Nov 23 04:51:01 localhost podman[295667]: 2025-11-23 09:51:01.875613504 +0000 UTC m=+0.268641767 container start c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_ptolemy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, architecture=x86_64, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:51:01 localhost podman[295667]: 2025-11-23 09:51:01.875812149 +0000 UTC m=+0.268840412 container attach c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_ptolemy, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.expose-services=, release=553, GIT_BRANCH=main, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:51:01 localhost bold_ptolemy[295699]: 167 167
Nov 23 04:51:01 localhost systemd[1]: libpod-c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114.scope: Deactivated successfully.
Nov 23 04:51:01 localhost podman[295667]: 2025-11-23 09:51:01.87919783 +0000 UTC m=+0.272226123 container died c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_ptolemy, release=553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Nov 23 04:51:01 localhost podman[295704]: 2025-11-23 09:51:01.970349165 +0000 UTC m=+0.082852493 container remove c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_ptolemy, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.expose-services=)
Nov 23 04:51:01 localhost systemd[1]: libpod-conmon-c37f558e1162c2bd6a1d01bf07f32dceeaffa8a86aefa73b31bd3e17b90a0114.scope: Deactivated successfully.
Nov 23 04:51:02 localhost ceph-mon[289043]: Reconfiguring mon.np0005532586 (monmap changed)...
Nov 23 04:51:02 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:51:02 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:02 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:02 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:02 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:51:02 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:02 localhost systemd[1]: var-lib-containers-storage-overlay-1c5d7d990aec3c0abcd3b377d699222a651ec20a48d214361ac39cc6a2ff29c4-merged.mount: Deactivated successfully.
Nov 23 04:51:03 localhost nova_compute[281613]: 2025-11-23 09:51:03.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:03 localhost nova_compute[281613]: 2025-11-23 09:51:03.035 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.034 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.034 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.035 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.035 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.036 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:51:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:51:04 localhost systemd[1]: tmp-crun.3MBfzk.mount: Deactivated successfully.
Nov 23 04:51:04 localhost podman[295740]: 2025-11-23 09:51:04.171065474 +0000 UTC m=+0.073798301 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:51:04 localhost podman[295740]: 2025-11-23 09:51:04.189127898 +0000 UTC m=+0.091860785 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 23 04:51:04 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.492 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.690 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.692 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11995MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.692 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.693 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.798 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.799 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:51:04 localhost nova_compute[281613]: 2025-11-23 09:51:04.815 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:51:05 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:51:05 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/159818063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:51:05 localhost ceph-mon[289043]: mon.np0005532586@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:05 localhost nova_compute[281613]: 2025-11-23 09:51:05.276 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:51:05 localhost nova_compute[281613]: 2025-11-23 09:51:05.283 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:51:05 localhost nova_compute[281613]: 2025-11-23 09:51:05.304 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:51:05 localhost nova_compute[281613]: 2025-11-23 09:51:05.307 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:51:05 localhost nova_compute[281613]: 2025-11-23 09:51:05.307 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:51:06 localhost nova_compute[281613]: 2025-11-23 09:51:06.308 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:06 localhost nova_compute[281613]: 2025-11-23 09:51:06.310 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:51:06 localhost nova_compute[281613]: 2025-11-23 09:51:06.310 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:51:06 localhost nova_compute[281613]: 2025-11-23 09:51:06.325 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:51:06 localhost nova_compute[281613]: 2025-11-23 09:51:06.326 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:06 localhost nova_compute[281613]: 2025-11-23 09:51:06.327 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:07 localhost nova_compute[281613]: 2025-11-23 09:51:07.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:07 localhost nova_compute[281613]: 2025-11-23 09:51:07.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:51:07 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:07 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf1e0 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Nov 23 04:51:07 localhost ceph-mon[289043]: mon.np0005532586@2(peon) e10  my rank is now 1 (was 2)
Nov 23 04:51:07 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Nov 23 04:51:07 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Nov 23 04:51:07 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557be50a6000 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0
Nov 23 04:51:07 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:51:07 localhost ceph-mon[289043]: paxos.1).electionLogic(40) init, last seen epoch 40
Nov 23 04:51:07 localhost ceph-mon[289043]: mon.np0005532586@1(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:07 localhost ceph-mon[289043]: mon.np0005532586@1(electing) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:51:07 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.34373 172.18.0.107:0/920168224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:51:08 localhost sshd[295802]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:51:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:51:09.256 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:51:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:51:09.257 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:51:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:51:09.257 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:51:09 localhost ceph-mon[289043]: mon.np0005532586@1(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:09 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:09 localhost ceph-mon[289043]: Remove daemons mon.np0005532582
Nov 23 04:51:09 localhost ceph-mon[289043]: Safe to remove mon.np0005532582: new quorum should be ['np0005532583', 'np0005532586', 'np0005532585', 'np0005532584'] (from ['np0005532583', 'np0005532586', 'np0005532585', 'np0005532584'])
Nov 23 04:51:09 localhost ceph-mon[289043]: Removing monitor np0005532582 from monmap...
Nov 23 04:51:09 localhost ceph-mon[289043]: Removing daemon mon.np0005532582 from np0005532582.localdomain -- ports []
Nov 23 04:51:09 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:51:09 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:51:09 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:51:09 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:51:09 localhost ceph-mon[289043]: mon.np0005532583 is new leader, mons np0005532583,np0005532586,np0005532585,np0005532584 in quorum (ranks 0,1,2,3)
Nov 23 04:51:09 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:51:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:51:10 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:11 localhost podman[240144]: time="2025-11-23T09:51:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:51:11 localhost podman[240144]: @ - - [23/Nov/2025:09:51:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:51:11 localhost podman[240144]: @ - - [23/Nov/2025:09:51:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19170 "" "Go-http-client/1.1"
Nov 23 04:51:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:11 localhost ceph-mon[289043]: Removed label mon from host np0005532582.localdomain
Nov 23 04:51:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:51:11 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:11 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:11 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:11 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:11 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:12 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:12 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:12 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:12 localhost ceph-mon[289043]: Updating np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:12 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: Removed label mgr from host np0005532582.localdomain
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:12 localhost ceph-mon[289043]: Removing daemon mgr.np0005532582.gilwrz from np0005532582.localdomain -- ports [8765]
Nov 23 04:51:14 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:14 localhost ceph-mon[289043]: Removed label _admin from host np0005532582.localdomain
Nov 23 04:51:15 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "mgr.np0005532582.gilwrz"} : dispatch
Nov 23 04:51:15 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005532582.gilwrz"}]': finished
Nov 23 04:51:15 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:15 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:15 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:16 localhost ceph-mon[289043]: Removing key for mgr.np0005532582.gilwrz
Nov 23 04:51:16 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:16 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:16 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:51:17 localhost ceph-mon[289043]: Removing np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:17 localhost ceph-mon[289043]: Removing np0005532582.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:51:17 localhost ceph-mon[289043]: Removing np0005532582.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:51:17 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:17 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:17 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:17 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:17 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532582.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:18 localhost ceph-mon[289043]: Reconfiguring crash.np0005532582 (monmap changed)...
Nov 23 04:51:18 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532582 on np0005532582.localdomain
Nov 23 04:51:18 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:18 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:18 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:51:19 localhost ceph-mon[289043]: Reconfiguring mon.np0005532583 (monmap changed)...
Nov 23 04:51:19 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain
Nov 23 04:51:19 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:19 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:19 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:51:20 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)...
Nov 23 04:51:20 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain
Nov 23 04:51:20 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:20 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:20 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:20 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:51:21 localhost ceph-mon[289043]: Reconfiguring crash.np0005532583 (monmap changed)...
Nov 23 04:51:21 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:51:21 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:21 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:21 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:21 localhost podman[296161]: 2025-11-23 09:51:21.182478469 +0000 UTC m=+0.083882001 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:51:21 localhost podman[296161]: 2025-11-23 09:51:21.222457841 +0000 UTC m=+0.123861353 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:51:21 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:51:21 localhost podman[296160]: 2025-11-23 09:51:21.240444974 +0000 UTC m=+0.144202689 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 23 04:51:21 localhost podman[296162]: 2025-11-23 09:51:21.291704829 +0000 UTC m=+0.187868100 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:51:21 localhost podman[296162]: 2025-11-23 09:51:21.303918836 +0000 UTC m=+0.200082137 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:51:21 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:51:21 localhost podman[296160]: 2025-11-23 09:51:21.357839153 +0000 UTC m=+0.261596878 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, io.openshift.tags=minimal rhel9, 
managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Nov 23 04:51:21 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:51:22 localhost ceph-mon[289043]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:51:22 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:51:22 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:22 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:22 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:51:22 localhost openstack_network_exporter[242118]: ERROR   09:51:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:51:22 localhost openstack_network_exporter[242118]: ERROR   09:51:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:51:22 localhost openstack_network_exporter[242118]: ERROR   09:51:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:51:22 localhost openstack_network_exporter[242118]: ERROR   09:51:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:51:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:51:22 localhost openstack_network_exporter[242118]: ERROR   09:51:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:51:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:51:23 localhost ceph-mon[289043]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:51:23 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:51:23 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:23 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:23 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:51:23 localhost ceph-mon[289043]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:51:23 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:51:24 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:24 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:24 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:51:24 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:51:24 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:51:24 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:24 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:24 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:51:25 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:25 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:51:25 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:51:25 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:25 localhost ceph-mon[289043]: Added label _no_schedule to host np0005532582.localdomain
Nov 23 04:51:25 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:25 localhost ceph-mon[289043]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532582.localdomain
Nov 23 04:51:25 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:25 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:25 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:51:26 localhost ceph-mon[289043]: Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:51:26 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:51:26 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:26 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:26 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:28 localhost ceph-mon[289043]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:51:28 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:51:28 localhost ceph-mon[289043]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain"} : dispatch
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain"}]': finished
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:28 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:51:29 localhost ceph-mon[289043]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:51:29 localhost ceph-mon[289043]: Removed host np0005532582.localdomain
Nov 23 04:51:29 localhost ceph-mon[289043]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:51:29 localhost ceph-mon[289043]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:51:29 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:29 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:29 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:51:30 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:51:30 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:51:30 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:30 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:30 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:51:30 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:31 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:51:31 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:51:31 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:31 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:31 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:51:31 localhost ceph-mon[289043]: Reconfiguring mon.np0005532585 (monmap changed)...
Nov 23 04:51:31 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:51:31 localhost podman[296241]: 2025-11-23 09:51:31.812823699 +0000 UTC m=+0.092746479 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:51:31 localhost podman[296241]: 2025-11-23 09:51:31.826001332 +0000 UTC m=+0.105924132 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:51:31 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:51:31 localhost podman[296282]: 2025-11-23 09:51:31.895105336 +0000 UTC m=+0.086342817 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:51:31 localhost podman[296282]: 2025-11-23 09:51:31.949767092 +0000 UTC m=+0.141004613 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:51:31 localhost podman[296240]: 2025-11-23 09:51:31.965639157 +0000 UTC m=+0.247291193 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:51:31 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:51:31 localhost podman[296240]: 2025-11-23 09:51:31.97393607 +0000 UTC m=+0.255588116 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:51:31 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:51:32 localhost podman[296340]: 
Nov 23 04:51:32 localhost podman[296340]: 2025-11-23 09:51:32.297859539 +0000 UTC m=+0.079369141 container create 91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_tesla, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.buildah.version=1.33.12, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public)
Nov 23 04:51:32 localhost systemd[1]: Started libpod-conmon-91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd.scope.
Nov 23 04:51:32 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:32 localhost podman[296340]: 2025-11-23 09:51:32.266911859 +0000 UTC m=+0.048421491 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:32 localhost podman[296340]: 2025-11-23 09:51:32.373178699 +0000 UTC m=+0.154688311 container init 91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_tesla, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:51:32 localhost podman[296340]: 2025-11-23 09:51:32.385885549 +0000 UTC m=+0.167395151 container start 91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_tesla, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:51:32 localhost podman[296340]: 2025-11-23 09:51:32.386155957 +0000 UTC m=+0.167665599 container attach 91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_tesla, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:51:32 localhost focused_tesla[296355]: 167 167
Nov 23 04:51:32 localhost systemd[1]: libpod-91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd.scope: Deactivated successfully.
Nov 23 04:51:32 localhost podman[296340]: 2025-11-23 09:51:32.390258007 +0000 UTC m=+0.171767609 container died 91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_tesla, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, vcs-type=git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main)
Nov 23 04:51:32 localhost podman[296360]: 2025-11-23 09:51:32.490485386 +0000 UTC m=+0.085522056 container remove 91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_tesla, io.openshift.expose-services=, GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, build-date=2025-09-24T08:57:55, ceph=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main)
Nov 23 04:51:32 localhost systemd[1]: libpod-conmon-91fb94b257aa699e42c5a43a52fa1a01b95c7087baa13a2afa98a76a2db76bfd.scope: Deactivated successfully.
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.602192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492602305, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2583, "num_deletes": 254, "total_data_size": 8106438, "memory_usage": 8660512, "flush_reason": "Manual Compaction"}
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492629688, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 4896659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15577, "largest_seqno": 18155, "table_properties": {"data_size": 4885941, "index_size": 6583, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27662, "raw_average_key_size": 22, "raw_value_size": 4862418, "raw_average_value_size": 3969, "num_data_blocks": 286, "num_entries": 1225, "num_filter_entries": 1225, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891440, "oldest_key_time": 1763891440, "file_creation_time": 1763891492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 27560 microseconds, and 13693 cpu microseconds.
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.629759) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 4896659 bytes OK
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.629788) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.632829) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.632852) EVENT_LOG_v1 {"time_micros": 1763891492632844, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.632878) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 8093887, prev total WAL file size 8098729, number of live WAL files 2.
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.634819) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(4781KB)], [24(13MB)]
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492634871, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19321830, "oldest_snapshot_seqno": -1}
Nov 23 04:51:32 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:32 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:32 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:32 localhost ceph-mon[289043]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:51:32 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:51:32 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:32 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:32 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10613 keys, 16121443 bytes, temperature: kUnknown
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492724595, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 16121443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16058704, "index_size": 35118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 284619, "raw_average_key_size": 26, "raw_value_size": 15875056, "raw_average_value_size": 1495, "num_data_blocks": 1346, "num_entries": 10613, "num_filter_entries": 10613, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 0, "file_creation_time": 1763891492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.725033) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 16121443 bytes
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.727199) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.9 rd, 179.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.7, 13.8 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 11165, records dropped: 552 output_compression: NoCompression
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.727231) EVENT_LOG_v1 {"time_micros": 1763891492727216, "job": 12, "event": "compaction_finished", "compaction_time_micros": 89900, "compaction_time_cpu_micros": 43392, "output_level": 6, "num_output_files": 1, "total_output_size": 16121443, "num_input_records": 11165, "num_output_records": 10613, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492728210, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891492730796, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.634717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.730868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.730876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.730879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.730882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:51:32 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:51:32.730884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:51:32 localhost systemd[1]: tmp-crun.Myet9O.mount: Deactivated successfully.
Nov 23 04:51:33 localhost podman[296431]: 
Nov 23 04:51:33 localhost podman[296431]: 2025-11-23 09:51:33.215255645 +0000 UTC m=+0.081347603 container create 591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_bouman, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64)
Nov 23 04:51:33 localhost systemd[1]: Started libpod-conmon-591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198.scope.
Nov 23 04:51:33 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:33 localhost podman[296431]: 2025-11-23 09:51:33.28141773 +0000 UTC m=+0.147509698 container init 591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_bouman, GIT_CLEAN=True, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container)
Nov 23 04:51:33 localhost podman[296431]: 2025-11-23 09:51:33.184193642 +0000 UTC m=+0.050285620 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:33 localhost podman[296431]: 2025-11-23 09:51:33.291877221 +0000 UTC m=+0.157969229 container start 591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_bouman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, GIT_BRANCH=main, vcs-type=git, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Nov 23 04:51:33 localhost podman[296431]: 2025-11-23 09:51:33.292287672 +0000 UTC m=+0.158379670 container attach 591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_bouman, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, release=553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:51:33 localhost infallible_bouman[296447]: 167 167
Nov 23 04:51:33 localhost systemd[1]: libpod-591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198.scope: Deactivated successfully.
Nov 23 04:51:33 localhost podman[296431]: 2025-11-23 09:51:33.297253625 +0000 UTC m=+0.163345623 container died 591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_bouman, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64)
Nov 23 04:51:33 localhost podman[296452]: 2025-11-23 09:51:33.39479077 +0000 UTC m=+0.085417441 container remove 591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_bouman, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:51:33 localhost systemd[1]: libpod-conmon-591837a59706a28fcfff8904845f1d91faeaed8224c1f76ff9bcc4c765304198.scope: Deactivated successfully.
Nov 23 04:51:33 localhost ceph-mon[289043]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:51:33 localhost ceph-mon[289043]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:51:33 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:33 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:33 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:51:33 localhost systemd[1]: var-lib-containers-storage-overlay-fbe1629730982d40a6e9316407bfd74e1dac45a16f548ca70aee8fa92091fdc8-merged.mount: Deactivated successfully.
Nov 23 04:51:34 localhost podman[296527]: 
Nov 23 04:51:34 localhost podman[296527]: 2025-11-23 09:51:34.272070642 +0000 UTC m=+0.076793841 container create b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_jepsen, version=7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, build-date=2025-09-24T08:57:55, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:51:34 localhost systemd[1]: Started libpod-conmon-b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be.scope.
Nov 23 04:51:34 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:34 localhost podman[296527]: 2025-11-23 09:51:34.333852969 +0000 UTC m=+0.138576168 container init b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_jepsen, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, architecture=x86_64, ceph=True, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:51:34 localhost podman[296527]: 2025-11-23 09:51:34.240674699 +0000 UTC m=+0.045397928 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:34 localhost podman[296527]: 2025-11-23 09:51:34.352419416 +0000 UTC m=+0.157142615 container start b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_jepsen, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, distribution-scope=public, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.buildah.version=1.33.12, vcs-type=git)
Nov 23 04:51:34 localhost podman[296527]: 2025-11-23 09:51:34.352677883 +0000 UTC m=+0.157401082 container attach b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_jepsen, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:51:34 localhost elastic_jepsen[296543]: 167 167
Nov 23 04:51:34 localhost systemd[1]: libpod-b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be.scope: Deactivated successfully.
Nov 23 04:51:34 localhost podman[296527]: 2025-11-23 09:51:34.386778719 +0000 UTC m=+0.191501918 container died b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_jepsen, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.openshift.expose-services=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:51:34 localhost podman[296542]: 2025-11-23 09:51:34.444700992 +0000 UTC m=+0.130473390 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:51:34 localhost podman[296542]: 2025-11-23 09:51:34.460975088 +0000 UTC m=+0.146747546 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:51:34 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:51:34 localhost podman[296558]: 2025-11-23 09:51:34.537658905 +0000 UTC m=+0.140392477 container remove b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_jepsen, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=)
Nov 23 04:51:34 localhost systemd[1]: libpod-conmon-b185b4c3772f983e1ef53818fbdba7d2cd100b3d098677bb4f2abd9826c006be.scope: Deactivated successfully.
Nov 23 04:51:34 localhost ceph-mon[289043]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:51:34 localhost ceph-mon[289043]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:51:34 localhost systemd[1]: tmp-crun.RHQ5EQ.mount: Deactivated successfully.
Nov 23 04:51:34 localhost systemd[1]: var-lib-containers-storage-overlay-8c40f2d80840ca335ec6dc65a9b347788e6d0c0c7366dc4cf150b0949d3ab0b0-merged.mount: Deactivated successfully.
Nov 23 04:51:35 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:35 localhost podman[296641]: 
Nov 23 04:51:35 localhost podman[296641]: 2025-11-23 09:51:35.35693368 +0000 UTC m=+0.075168837 container create d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_meninsky, GIT_CLEAN=True, version=7, architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, release=553, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, build-date=2025-09-24T08:57:55)
Nov 23 04:51:35 localhost systemd[1]: Started libpod-conmon-d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c.scope.
Nov 23 04:51:35 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:35 localhost podman[296641]: 2025-11-23 09:51:35.326422282 +0000 UTC m=+0.044657449 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:35 localhost podman[296641]: 2025-11-23 09:51:35.430795491 +0000 UTC m=+0.149030638 container init d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_meninsky, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=553, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12)
Nov 23 04:51:35 localhost podman[296641]: 2025-11-23 09:51:35.441507729 +0000 UTC m=+0.159742886 container start d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_meninsky, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Nov 23 04:51:35 localhost podman[296641]: 2025-11-23 09:51:35.441787606 +0000 UTC m=+0.160022793 container attach d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_meninsky, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553)
Nov 23 04:51:35 localhost exciting_meninsky[296656]: 167 167
Nov 23 04:51:35 localhost systemd[1]: libpod-d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c.scope: Deactivated successfully.
Nov 23 04:51:35 localhost podman[296641]: 2025-11-23 09:51:35.44531808 +0000 UTC m=+0.163553277 container died d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_meninsky, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, RELEASE=main, release=553, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, version=7, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:51:35 localhost podman[296661]: 2025-11-23 09:51:35.537889843 +0000 UTC m=+0.084660291 container remove d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_meninsky, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, architecture=x86_64, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=)
Nov 23 04:51:35 localhost systemd[1]: libpod-conmon-d06c93687aa163932ba6a2d057216a4e6c8e4a1a8a8bf9bc4b98a61d145abe5c.scope: Deactivated successfully.
Nov 23 04:51:35 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:35 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:35 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:51:35 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:51:35 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:51:35 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:35 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:35 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:51:35 localhost systemd[1]: var-lib-containers-storage-overlay-652ce21cbdc2dc317aed5a463cd29675503e14a22c7a03977514a751c616543e-merged.mount: Deactivated successfully.
Nov 23 04:51:36 localhost podman[296733]: 
Nov 23 04:51:36 localhost podman[296733]: 2025-11-23 09:51:36.237779366 +0000 UTC m=+0.077960852 container create f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_maxwell, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:51:36 localhost systemd[1]: Started libpod-conmon-f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631.scope.
Nov 23 04:51:36 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:36 localhost podman[296733]: 2025-11-23 09:51:36.305690007 +0000 UTC m=+0.145871443 container init f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_maxwell, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, release=553, vendor=Red Hat, Inc., ceph=True)
Nov 23 04:51:36 localhost podman[296733]: 2025-11-23 09:51:36.208883561 +0000 UTC m=+0.049065047 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:36 localhost podman[296733]: 2025-11-23 09:51:36.314842984 +0000 UTC m=+0.155024400 container start f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_maxwell, GIT_CLEAN=True, release=553, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.33.12, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7)
Nov 23 04:51:36 localhost podman[296733]: 2025-11-23 09:51:36.315021928 +0000 UTC m=+0.155203374 container attach f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_maxwell, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.openshift.expose-services=)
Nov 23 04:51:36 localhost heuristic_maxwell[296748]: 167 167
Nov 23 04:51:36 localhost systemd[1]: libpod-f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631.scope: Deactivated successfully.
Nov 23 04:51:36 localhost podman[296733]: 2025-11-23 09:51:36.317194147 +0000 UTC m=+0.157375573 container died f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_maxwell, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, ceph=True, release=553, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Nov 23 04:51:36 localhost podman[296754]: 2025-11-23 09:51:36.404705194 +0000 UTC m=+0.078863096 container remove f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_maxwell, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main)
Nov 23 04:51:36 localhost systemd[1]: libpod-conmon-f04264d030d928fdfad38c07458efc087d9b45136071ab3b8c26fabbf0b8d631.scope: Deactivated successfully.
Nov 23 04:51:36 localhost systemd[1]: var-lib-containers-storage-overlay-4c3599cafe7cb271f822378ee97407288cdee1751016d9108845e1661feb2179-merged.mount: Deactivated successfully.
Nov 23 04:51:36 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)...
Nov 23 04:51:36 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:51:36 localhost ceph-mon[289043]: Saving service mon spec with placement label:mon
Nov 23 04:51:36 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:36 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:36 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:36 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:51:36 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:38 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557be50a6160 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0
Nov 23 04:51:38 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:51:38 localhost ceph-mon[289043]: paxos.1).electionLogic(42) init, last seen epoch 42
Nov 23 04:51:38 localhost ceph-mon[289043]: mon.np0005532586@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:38 localhost ceph-mon[289043]: mon.np0005532586@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:41 localhost podman[240144]: time="2025-11-23T09:51:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:51:41 localhost podman[240144]: @ - - [23/Nov/2025:09:51:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:51:41 localhost podman[240144]: @ - - [23/Nov/2025:09:51:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19178 "" "Go-http-client/1.1"
Nov 23 04:51:42 localhost ceph-mds[286319]: mds.beacon.mds.np0005532586.mfohsb missed beacon ack from the monitors
Nov 23 04:51:43 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:43 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:51:43 localhost ceph-mon[289043]: paxos.1).electionLogic(45) init, last seen epoch 45, mid-election, bumping
Nov 23 04:51:43 localhost ceph-mon[289043]: mon.np0005532586@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:43 localhost ceph-mon[289043]: mon.np0005532586@1(electing) e11 handle_timecheck drop unexpected msg
Nov 23 04:51:43 localhost ceph-mon[289043]: mon.np0005532586@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:43 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:51:44 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:51:44 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:51:44 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:51:44 localhost ceph-mon[289043]: Health check failed: 1/3 mons down, quorum np0005532583,np0005532586 (MON_DOWN)
Nov 23 04:51:44 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:51:44 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:51:44 localhost ceph-mon[289043]: mon.np0005532583 is new leader, mons np0005532583,np0005532586,np0005532584 in quorum (ranks 0,1,2)
Nov 23 04:51:44 localhost ceph-mon[289043]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005532583,np0005532586)
Nov 23 04:51:44 localhost ceph-mon[289043]: Cluster is now healthy
Nov 23 04:51:44 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:51:44 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:44 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:44 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:44 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:51:44 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:44 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:44 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:44 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:51:45 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:46 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:46 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:46 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:46 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:46 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532583.orhywt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:51:47 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532583.orhywt (monmap changed)...
Nov 23 04:51:47 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532583.orhywt on np0005532583.localdomain
Nov 23 04:51:47 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:47 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:47 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:48 localhost ceph-mon[289043]: Reconfiguring crash.np0005532583 (monmap changed)...
Nov 23 04:51:48 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:51:48 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:48 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:48 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:49 localhost ceph-mon[289043]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:51:49 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:51:49 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:49 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:49 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:51:49 localhost ceph-mon[289043]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:51:49 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:49 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:50 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:51:50 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:50 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:51:50 localhost ceph-mon[289043]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:51:50 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:51:50 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:50 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:50 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:51:50 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:51 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:51:51 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:51:51 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:51 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:51 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:51:52 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:51:52 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:51:52 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:52 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:52 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:52 localhost ceph-mon[289043]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:51:52 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:51:52 localhost podman[297125]: 2025-11-23 09:51:52.196567187 +0000 UTC m=+0.101253636 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Nov 23 04:51:52 localhost systemd[1]: tmp-crun.Uj7HkK.mount: Deactivated successfully.
Nov 23 04:51:52 localhost podman[297127]: 2025-11-23 09:51:52.251613124 +0000 UTC m=+0.150729854 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:51:52 localhost podman[297127]: 2025-11-23 09:51:52.259117715 +0000 UTC m=+0.158234395 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:51:52 localhost podman[297125]: 2025-11-23 09:51:52.264382556 +0000 UTC m=+0.169068955 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 04:51:52 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:51:52 localhost openstack_network_exporter[242118]: ERROR   09:51:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:51:52 localhost openstack_network_exporter[242118]: ERROR   09:51:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:51:52 localhost openstack_network_exporter[242118]: ERROR   09:51:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:51:52 localhost openstack_network_exporter[242118]: ERROR   09:51:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:51:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:51:52 localhost openstack_network_exporter[242118]: ERROR   09:51:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:51:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:51:52 localhost podman[297126]: 2025-11-23 09:51:52.283852328 +0000 UTC m=+0.187032908 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 04:51:52 localhost podman[297126]: 2025-11-23 09:51:52.296019884 +0000 UTC m=+0.199200494 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 04:51:52 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:51:52 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:51:53 localhost systemd[1]: tmp-crun.kKKck4.mount: Deactivated successfully.
Nov 23 04:51:53 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:53 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:53 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:51:53 localhost ceph-mon[289043]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:51:53 localhost ceph-mon[289043]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:51:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:54 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:51:54 localhost ceph-mon[289043]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:51:54 localhost ceph-mon[289043]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:51:55 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:51:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:51:55 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:51:55 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:51:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:51:55 localhost ceph-mon[289043]: Deploying daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:51:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:55 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:51:56 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:51:56 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:51:57 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:51:57 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:51:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:57 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:58 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:51:58 localhost podman[297242]: 
Nov 23 04:51:58 localhost podman[297242]: 2025-11-23 09:51:58.884564695 +0000 UTC m=+0.066450703 container create 388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, release=553, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=7, RELEASE=main, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:51:58 localhost systemd[1]: Started libpod-conmon-388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc.scope.
Nov 23 04:51:58 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:58 localhost podman[297242]: 2025-11-23 09:51:58.861929817 +0000 UTC m=+0.043815805 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:58 localhost podman[297242]: 2025-11-23 09:51:58.968743882 +0000 UTC m=+0.150629890 container init 388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True)
Nov 23 04:51:58 localhost podman[297242]: 2025-11-23 09:51:58.979273915 +0000 UTC m=+0.161159913 container start 388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=553, build-date=2025-09-24T08:57:55, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main)
Nov 23 04:51:58 localhost podman[297242]: 2025-11-23 09:51:58.979586163 +0000 UTC m=+0.161472211 container attach 388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=553, architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True)
Nov 23 04:51:58 localhost gifted_clarke[297257]: 167 167
Nov 23 04:51:58 localhost systemd[1]: libpod-388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc.scope: Deactivated successfully.
Nov 23 04:51:58 localhost podman[297242]: 2025-11-23 09:51:58.983819387 +0000 UTC m=+0.165705435 container died 388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git, release=553, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7)
Nov 23 04:51:59 localhost podman[297263]: 2025-11-23 09:51:59.09538488 +0000 UTC m=+0.099281235 container remove 388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_BRANCH=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=)
Nov 23 04:51:59 localhost systemd[1]: libpod-conmon-388ce2831fd66232f1b5b3d90d4fa39421091a9fe0c4d891b45aea04b1eb36dc.scope: Deactivated successfully.
Nov 23 04:51:59 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:59 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:59 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:51:59 localhost ceph-mon[289043]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:51:59 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:51:59 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:59 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:51:59 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:51:59 localhost podman[297331]: 
Nov 23 04:51:59 localhost podman[297331]: 2025-11-23 09:51:59.765489014 +0000 UTC m=+0.072452135 container create d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_ganguly, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7)
Nov 23 04:51:59 localhost systemd[1]: Started libpod-conmon-d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61.scope.
Nov 23 04:51:59 localhost systemd[1]: Started libcrun container.
Nov 23 04:51:59 localhost podman[297331]: 2025-11-23 09:51:59.822177074 +0000 UTC m=+0.129140185 container init d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_ganguly, build-date=2025-09-24T08:57:55, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, version=7, distribution-scope=public, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Nov 23 04:51:59 localhost podman[297331]: 2025-11-23 09:51:59.830296262 +0000 UTC m=+0.137259373 container start d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_ganguly, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, distribution-scope=public, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, name=rhceph, release=553)
Nov 23 04:51:59 localhost podman[297331]: 2025-11-23 09:51:59.830481267 +0000 UTC m=+0.137444388 container attach d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_ganguly, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=)
Nov 23 04:51:59 localhost flamboyant_ganguly[297346]: 167 167
Nov 23 04:51:59 localhost systemd[1]: libpod-d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61.scope: Deactivated successfully.
Nov 23 04:51:59 localhost podman[297331]: 2025-11-23 09:51:59.834424652 +0000 UTC m=+0.141387773 container died d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_ganguly, GIT_CLEAN=True, GIT_BRANCH=main, release=553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:51:59 localhost podman[297331]: 2025-11-23 09:51:59.737684498 +0000 UTC m=+0.044647659 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:51:59 localhost systemd[1]: var-lib-containers-storage-overlay-35dbb81254a6627ba453a4514c0ee3696d557d353d58a4b11759095cb7fdaba5-merged.mount: Deactivated successfully.
Nov 23 04:51:59 localhost systemd[1]: var-lib-containers-storage-overlay-5a5ef11343a94f09384e1c12ad3980aaeff54cb102204e047a5e428a58694768-merged.mount: Deactivated successfully.
Nov 23 04:51:59 localhost podman[297351]: 2025-11-23 09:51:59.921378744 +0000 UTC m=+0.079923924 container remove d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_ganguly, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, release=553, distribution-scope=public, vcs-type=git, name=rhceph)
Nov 23 04:51:59 localhost systemd[1]: libpod-conmon-d1d7af724bf1e27952a50be63871412b930ff5172b026ed523b40b991b219d61.scope: Deactivated successfully.
Nov 23 04:52:00 localhost ceph-mon[289043]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:52:00 localhost ceph-mon[289043]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:52:00 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:00 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:00 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:52:00 localhost ceph-mon[289043]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:52:00 localhost ceph-mon[289043]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:52:00 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:00 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:00 localhost podman[297428]: 
Nov 23 04:52:00 localhost podman[297428]: 2025-11-23 09:52:00.740455734 +0000 UTC m=+0.072979448 container create e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shtern, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public)
Nov 23 04:52:00 localhost systemd[1]: Started libpod-conmon-e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95.scope.
Nov 23 04:52:00 localhost systemd[1]: Started libcrun container.
Nov 23 04:52:00 localhost podman[297428]: 2025-11-23 09:52:00.803259108 +0000 UTC m=+0.135782822 container init e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shtern, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64)
Nov 23 04:52:00 localhost podman[297428]: 2025-11-23 09:52:00.710951433 +0000 UTC m=+0.043475167 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:52:00 localhost podman[297428]: 2025-11-23 09:52:00.811870439 +0000 UTC m=+0.144394153 container start e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shtern, io.openshift.expose-services=, release=553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, distribution-scope=public, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.buildah.version=1.33.12, architecture=x86_64)
Nov 23 04:52:00 localhost podman[297428]: 2025-11-23 09:52:00.812120746 +0000 UTC m=+0.144644490 container attach e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shtern, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., release=553, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public)
Nov 23 04:52:00 localhost elated_shtern[297444]: 167 167
Nov 23 04:52:00 localhost systemd[1]: libpod-e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95.scope: Deactivated successfully.
Nov 23 04:52:00 localhost podman[297428]: 2025-11-23 09:52:00.815765824 +0000 UTC m=+0.148289568 container died e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shtern, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=553, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=)
Nov 23 04:52:00 localhost systemd[1]: var-lib-containers-storage-overlay-d406a0afd9ef4e6515672b77e37e0b29938108e6a843da9bfbc19608dd95c5ef-merged.mount: Deactivated successfully.
Nov 23 04:52:00 localhost podman[297449]: 2025-11-23 09:52:00.912064947 +0000 UTC m=+0.090091517 container remove e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shtern, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=553, version=7, GIT_BRANCH=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, build-date=2025-09-24T08:57:55)
Nov 23 04:52:00 localhost systemd[1]: libpod-conmon-e988737a9b41eb662cc02c83b08434c377f6dac70d48877f985a3ee9fb268e95.scope: Deactivated successfully.
Nov 23 04:52:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:01 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:52:01 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:52:01 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:52:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:52:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5773 writes, 25K keys, 5773 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5773 writes, 781 syncs, 7.39 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 133 writes, 409 keys, 133 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s#012Interval WAL: 133 writes, 57 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:52:01 localhost podman[297526]: 
Nov 23 04:52:01 localhost podman[297526]: 2025-11-23 09:52:01.714924762 +0000 UTC m=+0.076079981 container create a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_taussig, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, release=553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:52:01 localhost systemd[1]: Started libpod-conmon-a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967.scope.
Nov 23 04:52:01 localhost systemd[1]: Started libcrun container.
Nov 23 04:52:01 localhost podman[297526]: 2025-11-23 09:52:01.7737486 +0000 UTC m=+0.134903829 container init a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_taussig, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, build-date=2025-09-24T08:57:55, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public)
Nov 23 04:52:01 localhost podman[297526]: 2025-11-23 09:52:01.782897595 +0000 UTC m=+0.144052814 container start a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_taussig, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container)
Nov 23 04:52:01 localhost podman[297526]: 2025-11-23 09:52:01.783218243 +0000 UTC m=+0.144373482 container attach a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_taussig, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph)
Nov 23 04:52:01 localhost podman[297526]: 2025-11-23 09:52:01.685388069 +0000 UTC m=+0.046543298 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:52:01 localhost dreamy_taussig[297541]: 167 167
Nov 23 04:52:01 localhost systemd[1]: libpod-a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967.scope: Deactivated successfully.
Nov 23 04:52:01 localhost podman[297526]: 2025-11-23 09:52:01.786261575 +0000 UTC m=+0.147416774 container died a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_taussig, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-type=git, version=7, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main)
Nov 23 04:52:01 localhost podman[297546]: 2025-11-23 09:52:01.877308508 +0000 UTC m=+0.082527416 container remove a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_taussig, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, release=553, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:52:01 localhost systemd[1]: libpod-conmon-a97ce52ed41d57f72cfabfa4a1ea844133561eca517d2ea17221ea105c981967.scope: Deactivated successfully.
Nov 23 04:52:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:52:01 localhost systemd[1]: var-lib-containers-storage-overlay-7377135f5430501cc91445860179b885add94f80654f6d45e67c3f86841b3c6e-merged.mount: Deactivated successfully.
Nov 23 04:52:02 localhost podman[297563]: 2025-11-23 09:52:02.028055391 +0000 UTC m=+0.110296630 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:52:02 localhost podman[297563]: 2025-11-23 09:52:02.046943017 +0000 UTC m=+0.129184256 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:52:02 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:52:02 localhost podman[297595]: 2025-11-23 09:52:02.127261992 +0000 UTC m=+0.076694748 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:52:02 localhost podman[297592]: 2025-11-23 09:52:02.195553603 +0000 UTC m=+0.148484954 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:52:02 localhost podman[297592]: 2025-11-23 09:52:02.200071775 +0000 UTC m=+0.153003096 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 23 04:52:02 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:52:02 localhost podman[297595]: 2025-11-23 09:52:02.220337578 +0000 UTC m=+0.169770364 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:52:02 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:52:02 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:02 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:02 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:52:02 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)...
Nov 23 04:52:02 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:52:02 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:02 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:02 localhost podman[297681]: 
Nov 23 04:52:02 localhost podman[297681]: 2025-11-23 09:52:02.573245064 +0000 UTC m=+0.071783207 container create d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_newton, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, version=7, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-type=git)
Nov 23 04:52:02 localhost systemd[1]: Started libpod-conmon-d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4.scope.
Nov 23 04:52:02 localhost systemd[1]: Started libcrun container.
Nov 23 04:52:02 localhost podman[297681]: 2025-11-23 09:52:02.630669304 +0000 UTC m=+0.129207447 container init d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_newton, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, release=553, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:52:02 localhost podman[297681]: 2025-11-23 09:52:02.639608114 +0000 UTC m=+0.138146267 container start d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_newton, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, ceph=True, version=7, name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55)
Nov 23 04:52:02 localhost podman[297681]: 2025-11-23 09:52:02.639942673 +0000 UTC m=+0.138480856 container attach d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_newton, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_BRANCH=main, name=rhceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main)
Nov 23 04:52:02 localhost amazing_newton[297696]: 167 167
Nov 23 04:52:02 localhost systemd[1]: libpod-d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4.scope: Deactivated successfully.
Nov 23 04:52:02 localhost podman[297681]: 2025-11-23 09:52:02.543901717 +0000 UTC m=+0.042439890 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:52:02 localhost podman[297681]: 2025-11-23 09:52:02.643122548 +0000 UTC m=+0.141660711 container died d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_newton, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, io.buildah.version=1.33.12, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-type=git, release=553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:52:02 localhost podman[297701]: 2025-11-23 09:52:02.713457054 +0000 UTC m=+0.064995104 container remove d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_newton, version=7, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container)
Nov 23 04:52:02 localhost systemd[1]: libpod-conmon-d86a1aa774f4938f4fc07382ad4ef59c340af8ac2f0d300e3c0f9b89c8ac37b4.scope: Deactivated successfully.
Nov 23 04:52:03 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:03 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.041 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.041 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.042 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.042 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.042 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:52:04 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:04 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:52:04 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2486142759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.452 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.690 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.691 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12010MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.692 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.692 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.787 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.787 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:52:04 localhost nova_compute[281613]: 2025-11-23 09:52:04.804 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:52:05 localhost podman[297828]: 2025-11-23 09:52:05.183585699 +0000 UTC m=+0.085712640 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:52:05 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:52:05 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3793959948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:52:05 localhost podman[297828]: 2025-11-23 09:52:05.22498308 +0000 UTC m=+0.127110061 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 23 04:52:05 localhost nova_compute[281613]: 2025-11-23 09:52:05.225 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:52:05 localhost nova_compute[281613]: 2025-11-23 09:52:05.231 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:52:05 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:52:05 localhost nova_compute[281613]: 2025-11-23 09:52:05.255 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:52:05 localhost nova_compute[281613]: 2025-11-23 09:52:05.257 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:52:05 localhost nova_compute[281613]: 2025-11-23 09:52:05.257 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:52:05 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:05 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:05 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:52:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5071 writes, 22K keys, 5071 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5071 writes, 751 syncs, 6.75 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 142 writes, 369 keys, 142 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s#012Interval WAL: 142 writes, 67 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.256 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.258 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.258 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.279 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.280 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.280 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.281 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.281 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:06 localhost nova_compute[281613]: 2025-11-23 09:52:06.282 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:52:06 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:06 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:06 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:06 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:08 localhost nova_compute[281613]: 2025-11-23 09:52:08.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:08 localhost nova_compute[281613]: 2025-11-23 09:52:08.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:52:08 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:52:09.257 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:52:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:52:09.258 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:52:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:52:09.258 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:52:09 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 23 04:52:09 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1611615758' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Nov 23 04:52:09 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:10 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:10 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:10 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:10 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:10 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:10 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:10 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:10 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:10 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:10 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:11 localhost podman[240144]: time="2025-11-23T09:52:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:52:11 localhost podman[240144]: @ - - [23/Nov/2025:09:52:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:52:11 localhost podman[240144]: @ - - [23/Nov/2025:09:52:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19180 "" "Go-http-client/1.1"
Nov 23 04:52:11 localhost ceph-mon[289043]: Reconfig service osd.default_drive_group
Nov 23 04:52:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:11 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:52:11 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:52:12 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:12 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:12 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11 handle_command mon_command({"prefix": "mgr fail"} v 0)
Nov 23 04:52:12 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/885747258' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:52:12 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e85 e85: 6 total, 6 up, 6 in
Nov 23 04:52:12 localhost systemd[1]: session-66.scope: Deactivated successfully.
Nov 23 04:52:12 localhost systemd[1]: session-66.scope: Consumed 18.965s CPU time.
Nov 23 04:52:12 localhost systemd-logind[761]: Session 66 logged out. Waiting for processes to exit.
Nov 23 04:52:12 localhost systemd-logind[761]: Removed session 66.
Nov 23 04:52:13 localhost sshd[297886]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17301 172.18.0.106:0/4082313214' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:52:13 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:52:13 localhost ceph-mon[289043]: from='client.? 172.18.0.200:0/885747258' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:52:13 localhost ceph-mon[289043]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:52:13 localhost ceph-mon[289043]: Activating manager daemon np0005532585.gzafiw
Nov 23 04:52:13 localhost ceph-mon[289043]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 23 04:52:13 localhost ceph-mon[289043]: Manager daemon np0005532585.gzafiw is now available
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain.devices.0"} : dispatch
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain.devices.0"}]': finished
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain.devices.0"} : dispatch
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532582.localdomain.devices.0"}]': finished
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532585.gzafiw/mirror_snapshot_schedule"} : dispatch
Nov 23 04:52:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532585.gzafiw/trash_purge_schedule"} : dispatch
Nov 23 04:52:13 localhost systemd-logind[761]: New session 67 of user ceph-admin.
Nov 23 04:52:13 localhost systemd[1]: Started Session 67 of User ceph-admin.
Nov 23 04:52:14 localhost ceph-mon[289043]: removing stray HostCache host record np0005532582.localdomain.devices.0
Nov 23 04:52:14 localhost podman[297996]: 2025-11-23 09:52:14.177505947 +0000 UTC m=+0.089719778 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:52:14 localhost podman[297996]: 2025-11-23 09:52:14.282897954 +0000 UTC m=+0.195111775 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, release=553, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container)
Nov 23 04:52:14 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:15 localhost ceph-mon[289043]: [23/Nov/2025:09:52:14] ENGINE Bus STARTING
Nov 23 04:52:15 localhost ceph-mon[289043]: [23/Nov/2025:09:52:14] ENGINE Serving on http://172.18.0.107:8765
Nov 23 04:52:15 localhost ceph-mon[289043]: [23/Nov/2025:09:52:14] ENGINE Serving on https://172.18.0.107:7150
Nov 23 04:52:15 localhost ceph-mon[289043]: [23/Nov/2025:09:52:14] ENGINE Client ('172.18.0.107', 35764) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 04:52:15 localhost ceph-mon[289043]: [23/Nov/2025:09:52:14] ENGINE Bus STARTED
Nov 23 04:52:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:15 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:16 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:16 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:16 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:16 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd/host:np0005532583", "name": "osd_memory_target"} : dispatch
Nov 23 04:52:16 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:16 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:52:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:18 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 04:52:18 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:52:18 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 04:52:18 localhost ceph-mon[289043]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 04:52:18 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 04:52:18 localhost ceph-mon[289043]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:52:18 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:52:18 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:52:18 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:52:18 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:52:18 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:18 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:18 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:18 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:18 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:19 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:52:19 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:52:19 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:52:19 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:52:19 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:19 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:20 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:20 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:52:20 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:52:20 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:52:20 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:52:20 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:20 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:20 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:20 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:20 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:20 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:20 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:20 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:52:20 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:20 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:21 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:52:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:52:22 localhost openstack_network_exporter[242118]: ERROR   09:52:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:52:22 localhost openstack_network_exporter[242118]: ERROR   09:52:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:52:22 localhost openstack_network_exporter[242118]: ERROR   09:52:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:52:22 localhost openstack_network_exporter[242118]: ERROR   09:52:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:52:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:52:22 localhost openstack_network_exporter[242118]: ERROR   09:52:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:52:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:52:22 localhost ceph-mon[289043]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:52:22 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:22 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:22 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:22 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:22 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:52:22 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:22 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:22 localhost sshd[298897]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:52:23 localhost podman[298918]: 2025-11-23 09:52:23.182694817 +0000 UTC m=+0.096198871 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6)
Nov 23 04:52:23 localhost podman[298924]: 2025-11-23 09:52:23.227778326 +0000 UTC m=+0.137410466 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:52:23 localhost podman[298924]: 2025-11-23 09:52:23.238956466 +0000 UTC m=+0.148588596 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:52:23 localhost podman[298918]: 2025-11-23 09:52:23.247344541 +0000 UTC m=+0.160848605 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6)
Nov 23 04:52:23 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:52:23 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:52:23 localhost podman[298921]: 2025-11-23 09:52:23.334663593 +0000 UTC m=+0.247153850 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 04:52:23 localhost podman[298921]: 2025-11-23 09:52:23.345468643 +0000 UTC m=+0.257958930 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:52:23 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:52:23 localhost ceph-mon[289043]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:52:23 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:23 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:23 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:23 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:23 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:23 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:52:23 localhost ceph-mon[289043]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:52:23 localhost podman[299015]: 
Nov 23 04:52:23 localhost podman[299015]: 2025-11-23 09:52:23.587635829 +0000 UTC m=+0.077821909 container create 03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_liskov, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:52:23 localhost systemd[1]: Started libpod-conmon-03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164.scope.
Nov 23 04:52:23 localhost systemd[1]: Started libcrun container.
Nov 23 04:52:23 localhost podman[299015]: 2025-11-23 09:52:23.64885072 +0000 UTC m=+0.139036770 container init 03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_liskov, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, ceph=True, RELEASE=main, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:52:23 localhost podman[299015]: 2025-11-23 09:52:23.556074142 +0000 UTC m=+0.046260262 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:52:23 localhost podman[299015]: 2025-11-23 09:52:23.659019433 +0000 UTC m=+0.149205483 container start 03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_liskov, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, name=rhceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, ceph=True, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:52:23 localhost podman[299015]: 2025-11-23 09:52:23.659490345 +0000 UTC m=+0.149676395 container attach 03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_liskov, build-date=2025-09-24T08:57:55, release=553, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.33.12, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:52:23 localhost suspicious_liskov[299030]: 167 167
Nov 23 04:52:23 localhost systemd[1]: libpod-03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164.scope: Deactivated successfully.
Nov 23 04:52:23 localhost podman[299015]: 2025-11-23 09:52:23.66336185 +0000 UTC m=+0.153547910 container died 03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_liskov, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-type=git, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, release=553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main)
Nov 23 04:52:23 localhost podman[299035]: 2025-11-23 09:52:23.760179266 +0000 UTC m=+0.088338660 container remove 03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_liskov, architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, RELEASE=main, release=553, GIT_CLEAN=True)
Nov 23 04:52:23 localhost systemd[1]: libpod-conmon-03f1ade94897971398e74137d3b1680b997478123d35fefb9ba8832b8d7b2164.scope: Deactivated successfully.
Nov 23 04:52:24 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:24 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:24 localhost podman[299112]: 
Nov 23 04:52:24 localhost podman[299112]: 2025-11-23 09:52:24.585655318 +0000 UTC m=+0.078443655 container create 623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_brahmagupta, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, release=553, io.openshift.expose-services=, ceph=True, RELEASE=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:52:24 localhost systemd[1]: Started libpod-conmon-623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b.scope.
Nov 23 04:52:24 localhost podman[299112]: 2025-11-23 09:52:24.554822471 +0000 UTC m=+0.047610838 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:52:24 localhost systemd[1]: Started libcrun container.
Nov 23 04:52:24 localhost podman[299112]: 2025-11-23 09:52:24.691106486 +0000 UTC m=+0.183894823 container init 623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_brahmagupta, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, name=rhceph, distribution-scope=public, build-date=2025-09-24T08:57:55)
Nov 23 04:52:24 localhost podman[299112]: 2025-11-23 09:52:24.700092737 +0000 UTC m=+0.192881074 container start 623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_brahmagupta, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, release=553, vcs-type=git, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, build-date=2025-09-24T08:57:55, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph)
Nov 23 04:52:24 localhost podman[299112]: 2025-11-23 09:52:24.700415176 +0000 UTC m=+0.193203513 container attach 623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_brahmagupta, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.33.12, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, release=553)
Nov 23 04:52:24 localhost sleepy_brahmagupta[299127]: 167 167
Nov 23 04:52:24 localhost systemd[1]: libpod-623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b.scope: Deactivated successfully.
Nov 23 04:52:24 localhost podman[299112]: 2025-11-23 09:52:24.705633156 +0000 UTC m=+0.198421543 container died 623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_brahmagupta, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.33.12, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, CEPH_POINT_RELEASE=, release=553, io.openshift.expose-services=, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7)
Nov 23 04:52:24 localhost podman[299132]: 2025-11-23 09:52:24.798409114 +0000 UTC m=+0.079469853 container remove 623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_brahmagupta, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:52:24 localhost systemd[1]: libpod-conmon-623a7ab93bff3f3d720963741705fd017c0cb9c03785abc399ef43fd0a1d225b.scope: Deactivated successfully.
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:24.948398) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891544948466, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2393, "num_deletes": 267, "total_data_size": 7368199, "memory_usage": 7671520, "flush_reason": "Manual Compaction"}
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891544970430, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 4100031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18160, "largest_seqno": 20548, "table_properties": {"data_size": 4090843, "index_size": 5437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23448, "raw_average_key_size": 21, "raw_value_size": 4070661, "raw_average_value_size": 3811, "num_data_blocks": 230, "num_entries": 1068, "num_filter_entries": 1068, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891492, "oldest_key_time": 1763891492, "file_creation_time": 1763891544, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 22092 microseconds, and 8246 cpu microseconds.
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:52:24 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:24 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:24 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:24 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:24 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:52:24 localhost ceph-mon[289043]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:24.970493) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 4100031 bytes OK
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:24.970551) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:24.973971) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:24.973996) EVENT_LOG_v1 {"time_micros": 1763891544973990, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:24.974020) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7356461, prev total WAL file size 7370474, number of live WAL files 2.
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:24.975737) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323634' seq:72057594037927935, type:22 .. '6B760031353238' seq:0, type:0; will stop at (end)
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(4003KB)], [27(15MB)]
Nov 23 04:52:24 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891544975793, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 20221474, "oldest_snapshot_seqno": -1}
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11185 keys, 19357674 bytes, temperature: kUnknown
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891545067953, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 19357674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19289968, "index_size": 38677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 299603, "raw_average_key_size": 26, "raw_value_size": 19095037, "raw_average_value_size": 1707, "num_data_blocks": 1480, "num_entries": 11185, "num_filter_entries": 11185, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 0, "file_creation_time": 1763891544, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:25.069307) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 19357674 bytes
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:25.074297) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.2 rd, 209.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 15.4 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(9.7) write-amplify(4.7) OK, records in: 11681, records dropped: 496 output_compression: NoCompression
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:25.074338) EVENT_LOG_v1 {"time_micros": 1763891545074320, "job": 14, "event": "compaction_finished", "compaction_time_micros": 92238, "compaction_time_cpu_micros": 50652, "output_level": 6, "num_output_files": 1, "total_output_size": 19357674, "num_input_records": 11681, "num_output_records": 11185, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891545075360, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891545078057, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:24.975612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:25.078208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:25.078214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:25.078217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:25.078220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:25 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:25.078224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:25 localhost systemd[1]: var-lib-containers-storage-overlay-df220bf6f2994b15af01146d41de4c13bfc0438f15e9b96ee30bcec51c6dac95-merged.mount: Deactivated successfully.
Nov 23 04:52:25 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:26 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:26 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:26 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:26 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:26 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:26 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:26 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:27 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:27 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:27 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:27 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:28 localhost ceph-mon[289043]: Saving service mon spec with placement label:mon
Nov 23 04:52:28 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:52:28 localhost ceph-mon[289043]: Reconfiguring mon.np0005532583 (monmap changed)...
Nov 23 04:52:28 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532583 on np0005532583.localdomain
Nov 23 04:52:28 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:28 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:28 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:29 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:29 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:29 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:52:29 localhost ceph-mon[289043]: Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:52:29 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:52:29 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:29 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:29 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:52:29 localhost podman[299244]: 
Nov 23 04:52:29 localhost podman[299244]: 2025-11-23 09:52:29.811202248 +0000 UTC m=+0.072937877 container create 1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_mestorf, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, name=rhceph, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, ceph=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7)
Nov 23 04:52:29 localhost systemd[1]: Started libpod-conmon-1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d.scope.
Nov 23 04:52:29 localhost systemd[1]: Started libcrun container.
Nov 23 04:52:29 localhost podman[299244]: 2025-11-23 09:52:29.87537494 +0000 UTC m=+0.137110569 container init 1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_mestorf, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, version=7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:52:29 localhost podman[299244]: 2025-11-23 09:52:29.781830061 +0000 UTC m=+0.043565750 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:52:29 localhost podman[299244]: 2025-11-23 09:52:29.884928765 +0000 UTC m=+0.146664394 container start 1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_mestorf, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, build-date=2025-09-24T08:57:55, version=7, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7)
Nov 23 04:52:29 localhost podman[299244]: 2025-11-23 09:52:29.885378888 +0000 UTC m=+0.147114567 container attach 1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_mestorf, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True)
Nov 23 04:52:29 localhost romantic_mestorf[299260]: 167 167
Nov 23 04:52:29 localhost systemd[1]: libpod-1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d.scope: Deactivated successfully.
Nov 23 04:52:29 localhost podman[299244]: 2025-11-23 09:52:29.888511742 +0000 UTC m=+0.150247431 container died 1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_mestorf, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, name=rhceph, io.openshift.expose-services=, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553)
Nov 23 04:52:29 localhost podman[299265]: 2025-11-23 09:52:29.982616616 +0000 UTC m=+0.082558065 container remove 1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_mestorf, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, io.buildah.version=1.33.12, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Nov 23 04:52:29 localhost systemd[1]: libpod-conmon-1fb0e50ab369ccd64002c3be4fb94d85fa6c0475dece61667930f32c5515463d.scope: Deactivated successfully.
Nov 23 04:52:30 localhost ceph-mon[289043]: Reconfiguring mon.np0005532586 (monmap changed)...
Nov 23 04:52:30 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:52:30 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:30 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:30 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:30 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:30 localhost systemd[1]: var-lib-containers-storage-overlay-e1ff38ec8365b4b5bb2d6303e0f024e75d7c90bd0fb27283649839ad291bda6b-merged.mount: Deactivated successfully.
Nov 23 04:52:32 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:32 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:52:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:52:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:52:32 localhost podman[299283]: 2025-11-23 09:52:32.866753945 +0000 UTC m=+0.092781099 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:52:32 localhost podman[299283]: 2025-11-23 09:52:32.876591979 +0000 UTC m=+0.102619133 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:52:32 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:52:32 localhost systemd[1]: tmp-crun.UeSxPH.mount: Deactivated successfully.
Nov 23 04:52:32 localhost podman[299284]: 2025-11-23 09:52:32.977016273 +0000 UTC m=+0.200230271 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:52:33 localhost podman[299284]: 2025-11-23 09:52:33.020948491 +0000 UTC m=+0.244162529 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:52:33 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:52:33 localhost podman[299282]: 2025-11-23 09:52:33.027178258 +0000 UTC m=+0.254213960 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 23 04:52:33 localhost podman[299282]: 2025-11-23 09:52:33.110970126 +0000 UTC m=+0.338005838 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 04:52:33 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 23 04:52:33 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2656055149' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Nov 23 04:52:33 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:52:33 localhost systemd[1]: tmp-crun.oelF9G.mount: Deactivated successfully.
Nov 23 04:52:34 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:35 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:52:36 localhost podman[299348]: 2025-11-23 09:52:36.172144194 +0000 UTC m=+0.079533875 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 04:52:36 localhost podman[299348]: 2025-11-23 09:52:36.187004962 +0000 UTC m=+0.094394643 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 23 04:52:36 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:52:36 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:38 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.452432) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559452845, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 640, "num_deletes": 251, "total_data_size": 669873, "memory_usage": 682328, "flush_reason": "Manual Compaction"}
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559458975, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 421927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20553, "largest_seqno": 21188, "table_properties": {"data_size": 418588, "index_size": 1194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9043, "raw_average_key_size": 21, "raw_value_size": 411513, "raw_average_value_size": 970, "num_data_blocks": 48, "num_entries": 424, "num_filter_entries": 424, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891544, "oldest_key_time": 1763891544, "file_creation_time": 1763891559, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 6278 microseconds, and 2680 cpu microseconds.
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.459034) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 421927 bytes OK
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.459060) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.460933) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.460956) EVENT_LOG_v1 {"time_micros": 1763891559460950, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.460982) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 666162, prev total WAL file size 666162, number of live WAL files 2.
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.461618) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(412KB)], [30(18MB)]
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559461668, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19779601, "oldest_snapshot_seqno": -1}
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11084 keys, 16904215 bytes, temperature: kUnknown
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559570118, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 16904215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16839227, "index_size": 36215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 298200, "raw_average_key_size": 26, "raw_value_size": 16648042, "raw_average_value_size": 1501, "num_data_blocks": 1373, "num_entries": 11084, "num_filter_entries": 11084, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891315, "oldest_key_time": 0, "file_creation_time": 1763891559, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1978a202-f4a3-46d3-80fc-ff640bbe93f1", "db_session_id": "HZ2TGTJV75XDPMSTGKX9", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.570827) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 16904215 bytes
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.572746) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.6 rd, 155.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 18.5 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(86.9) write-amplify(40.1) OK, records in: 11609, records dropped: 525 output_compression: NoCompression
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.572786) EVENT_LOG_v1 {"time_micros": 1763891559572769, "job": 16, "event": "compaction_finished", "compaction_time_micros": 108908, "compaction_time_cpu_micros": 43294, "output_level": 6, "num_output_files": 1, "total_output_size": 16904215, "num_input_records": 11609, "num_output_records": 11084, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559573791, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891559578402, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.461494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.578677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.578683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.578687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.578690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:39 localhost ceph-mon[289043]: rocksdb: (Original Log Time 2025/11/23-09:52:39.578693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:52:40 localhost ceph-mon[289043]: mon.np0005532586@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:40 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:40 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:41 localhost podman[240144]: time="2025-11-23T09:52:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:52:41 localhost podman[240144]: @ - - [23/Nov/2025:09:52:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:52:41 localhost podman[240144]: @ - - [23/Nov/2025:09:52:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19172 "" "Go-http-client/1.1"
Nov 23 04:52:41 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586@1(peon) e12  my rank is now 0 (was 1)
Nov 23 04:52:41 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Nov 23 04:52:41 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Nov 23 04:52:41 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557be5558000 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532584"} v 0)
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532584"} : dispatch
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532586"} v 0)
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532586"} : dispatch
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:52:41 localhost ceph-mon[289043]: paxos.0).electionLogic(48) init, last seen epoch 48
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 is new leader, mons np0005532586,np0005532584 in quorum (ranks 0,1)
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : monmap epoch 12
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T09:52:41.580241+0000
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : created 2025-11-23T07:39:05.590972+0000
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005532586
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005532584
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005532586.mfohsb=up:active} 2 up:standby
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : mgrmap e33: np0005532585.gzafiw(active, since 29s), standbys: np0005532586.thmvqb, np0005532583.orhywt, np0005532584.naxwxy
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e12 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:41 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 23 04:52:41 localhost ceph-mon[289043]: Remove daemons mon.np0005532583
Nov 23 04:52:41 localhost ceph-mon[289043]: Safe to remove mon.np0005532583: new quorum should be ['np0005532586', 'np0005532584'] (from ['np0005532586', 'np0005532584'])
Nov 23 04:52:41 localhost ceph-mon[289043]: Removing monitor np0005532583 from monmap...
Nov 23 04:52:41 localhost ceph-mon[289043]: Removing daemon mon.np0005532583 from np0005532583.localdomain -- ports []
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:52:41 localhost ceph-mon[289043]: mon.np0005532586 is new leader, mons np0005532586,np0005532584 in quorum (ranks 0,1)
Nov 23 04:52:41 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:41 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:52:42 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:42 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:42 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e12  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:42 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e12  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Nov 23 04:52:42 localhost ceph-mon[289043]: mon.np0005532586@0(leader).monmap v12 adding/updating np0005532585 at [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to monitor cluster
Nov 23 04:52:42 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557be5558160 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Nov 23 04:52:42 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532584"} v 0)
Nov 23 04:52:42 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532584"} : dispatch
Nov 23 04:52:42 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:42 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:42 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532586"} v 0)
Nov 23 04:52:42 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532586"} : dispatch
Nov 23 04:52:42 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:52:42 localhost ceph-mon[289043]: paxos.0).electionLogic(50) init, last seen epoch 50
Nov 23 04:52:42 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:43 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:52:43 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:52:43 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:52:43 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain.devices.0}] v 0)
Nov 23 04:52:43 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:43 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:44 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:44 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:45 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:45 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:46 localhost sshd[299686]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:52:46 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:46 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:46 localhost ceph-mds[286319]: mds.beacon.mds.np0005532586.mfohsb missed beacon ack from the monitors
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 is new leader, mons np0005532586,np0005532584 in quorum (ranks 0,1)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : monmap epoch 13
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T09:52:42.566799+0000
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : created 2025-11-23T07:39:05.590972+0000
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005532586
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005532584
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005532585
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005532586.mfohsb=up:active} 2 up:standby
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : mgrmap e33: np0005532585.gzafiw(active, since 34s), standbys: np0005532586.thmvqb, np0005532583.orhywt, np0005532584.naxwxy
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum np0005532586,np0005532584 (MON_DOWN)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum np0005532586,np0005532584
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum np0005532586,np0005532584
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(cluster) log [WRN] :     mon.np0005532585 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain}] v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:47 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:47 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:52:47 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586 is new leader, mons np0005532586,np0005532584 in quorum (ranks 0,1)
Nov 23 04:52:47 localhost ceph-mon[289043]: Health check failed: 1/3 mons down, quorum np0005532586,np0005532584 (MON_DOWN)
Nov 23 04:52:47 localhost ceph-mon[289043]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005532586,np0005532584
Nov 23 04:52:47 localhost ceph-mon[289043]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005532586,np0005532584
Nov 23 04:52:47 localhost ceph-mon[289043]:    mon.np0005532585 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum)
Nov 23 04:52:47 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:52:47 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 04:52:47 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:48 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:52:48 localhost ceph-mon[289043]: Deploying daemon mon.np0005532583 on np0005532583.localdomain
Nov 23 04:52:48 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:48 localhost ceph-mon[289043]: Removed label mon from host np0005532583.localdomain
Nov 23 04:52:49 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:52:49 localhost ceph-mon[289043]: paxos.0).electionLogic(52) init, last seen epoch 52
Nov 23 04:52:49 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585 in quorum (ranks 0,1,2)
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : monmap epoch 13
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T09:52:42.566799+0000
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : created 2025-11-23T07:39:05.590972+0000
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005532586
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005532584
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005532585
Nov 23 04:52:49 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005532586.mfohsb=up:active} 2 up:standby
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : mgrmap e33: np0005532585.gzafiw(active, since 37s), standbys: np0005532586.thmvqb, np0005532583.orhywt, np0005532584.naxwxy
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005532586,np0005532584)
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 23 04:52:49 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 23 04:52:49 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:52:49 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:52:49 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:52:49 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:52:49 localhost ceph-mon[289043]: mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585 in quorum (ranks 0,1,2)
Nov 23 04:52:49 localhost ceph-mon[289043]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005532586,np0005532584)
Nov 23 04:52:49 localhost ceph-mon[289043]: Cluster is now healthy
Nov 23 04:52:49 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain.devices.0}] v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain}] v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader).monmap v13 adding/updating np0005532583 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to monitor cluster
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532583"} v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532583"} : dispatch
Nov 23 04:52:50 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557be55582c0 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532583"} v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532583"} : dispatch
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532584"} v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532584"} : dispatch
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532586"} v 0)
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532586"} : dispatch
Nov 23 04:52:50 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:52:50 localhost ceph-mon[289043]: paxos.0).electionLogic(54) init, last seen epoch 54
Nov 23 04:52:50 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:51 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532583"} v 0)
Nov 23 04:52:51 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532583"} : dispatch
Nov 23 04:52:52 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain.devices.0}] v 0)
Nov 23 04:52:52 localhost openstack_network_exporter[242118]: ERROR   09:52:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:52:52 localhost openstack_network_exporter[242118]: ERROR   09:52:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:52:52 localhost openstack_network_exporter[242118]: ERROR   09:52:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:52:52 localhost openstack_network_exporter[242118]: ERROR   09:52:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:52:52 localhost openstack_network_exporter[242118]: ERROR   09:52:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:52:52 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532583"} v 0)
Nov 23 04:52:52 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532583"} : dispatch
Nov 23 04:52:52 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 04:52:53 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532583"} v 0)
Nov 23 04:52:53 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532583"} : dispatch
Nov 23 04:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:52:54 localhost systemd[1]: tmp-crun.5m9Jje.mount: Deactivated successfully.
Nov 23 04:52:54 localhost podman[299707]: 2025-11-23 09:52:54.192883031 +0000 UTC m=+0.092517452 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Nov 23 04:52:54 localhost podman[299706]: 2025-11-23 09:52:54.22866372 +0000 UTC m=+0.131674543 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 04:52:54 localhost podman[299707]: 2025-11-23 09:52:54.254159564 +0000 UTC m=+0.153794025 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:52:54 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:52:54 localhost podman[299706]: 2025-11-23 09:52:54.267459541 +0000 UTC m=+0.170470394 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Nov 23 04:52:54 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:52:54 localhost podman[299708]: 2025-11-23 09:52:54.345861154 +0000 UTC m=+0.241967571 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:52:54 localhost podman[299708]: 2025-11-23 09:52:54.382916067 +0000 UTC m=+0.279022504 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:52:54 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:52:54 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532583"} v 0)
Nov 23 04:52:54 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532583"} : dispatch
Nov 23 04:52:54 localhost ceph-mds[286319]: mds.beacon.mds.np0005532586.mfohsb missed beacon ack from the monitors
Nov 23 04:52:54 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_auth_request failed to assign global_id
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_auth_request failed to assign global_id
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_auth_request failed to assign global_id
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532583"} v 0)
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532583"} : dispatch
Nov 23 04:52:55 localhost ceph-mon[289043]: paxos.0).electionLogic(55) init, last seen epoch 55, mid-election, bumping
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585,np0005532583 in quorum (ranks 0,1,2,3)
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : monmap epoch 14
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T09:52:50.591476+0000
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : created 2025-11-23T07:39:05.590972+0000
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005532586
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005532584
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005532585
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 3: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005532583
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005532586.mfohsb=up:active} 2 up:standby
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : mgrmap e33: np0005532585.gzafiw(active, since 43s), standbys: np0005532586.thmvqb, np0005532583.orhywt, np0005532584.naxwxy
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain}] v 0)
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 04:52:55 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532583 calling monitor election
Nov 23 04:52:55 localhost ceph-mon[289043]: mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585,np0005532583 in quorum (ranks 0,1,2,3)
Nov 23 04:52:55 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:52:55 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:55 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:55 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:56 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 04:52:56 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:56 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532583"} v 0)
Nov 23 04:52:56 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532583"} : dispatch
Nov 23 04:52:56 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:52:56 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/etc/ceph/ceph.conf
Nov 23 04:52:56 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:52:56 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:52:56 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:52:56 localhost ceph-mon[289043]: Updating np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:56 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:56 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:56 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain.devices.0}] v 0)
Nov 23 04:52:56 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:56 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain}] v 0)
Nov 23 04:52:56 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:52:57 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:52:57 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:52:57 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:52:57 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:52:57 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:52:57 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 04:52:57 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 04:52:57 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:57 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:52:57 localhost ceph-mon[289043]: Removed label mgr from host np0005532583.localdomain
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:57 localhost ceph-mon[289043]: Removing daemon mgr.np0005532583.orhywt from np0005532583.localdomain -- ports [8765]
Nov 23 04:52:58 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:58 localhost ceph-mon[289043]: Removed label _admin from host np0005532583.localdomain
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005532583.orhywt"} v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth rm", "entity": "mgr.np0005532583.orhywt"} : dispatch
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005532583.orhywt"}]': finished
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command({"prefix": "mon ok-to-stop", "ids": ["np0005532583"]} v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon ok-to-stop", "ids": ["np0005532583"]} : dispatch
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "quorum_status"} : dispatch
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e14 handle_command mon_command({"prefix": "mon rm", "name": "np0005532583"} v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon rm", "name": "np0005532583"} : dispatch
Nov 23 04:52:59 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557bdb8bf1e0 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532584"} v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532584"} : dispatch
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(probing) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532586"} v 0)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon metadata", "id": "np0005532586"} : dispatch
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:52:59 localhost ceph-mon[289043]: paxos.0).electionLogic(58) init, last seen epoch 58
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585 in quorum (ranks 0,1,2)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : monmap epoch 15
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : last_changed 2025-11-23T09:52:59.410256+0000
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : created 2025-11-23T07:39:05.590972+0000
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005532586
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005532584
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005532585
Nov 23 04:52:59 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005532586.mfohsb=up:active} 2 up:standby
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : osdmap e85: 6 total, 6 up, 6 in
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [DBG] : mgrmap e33: np0005532585.gzafiw(active, since 46s), standbys: np0005532586.thmvqb, np0005532583.orhywt, np0005532584.naxwxy
Nov 23 04:52:59 localhost ceph-mon[289043]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 23 04:53:00 localhost ceph-mon[289043]: mon.np0005532586@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:53:00 localhost ceph-mon[289043]: Removing key for mgr.np0005532583.orhywt
Nov 23 04:53:00 localhost ceph-mon[289043]: Safe to remove mon.np0005532583: new quorum should be ['np0005532586', 'np0005532584', 'np0005532585'] (from ['np0005532586', 'np0005532584', 'np0005532585'])
Nov 23 04:53:00 localhost ceph-mon[289043]: Removing monitor np0005532583 from monmap...
Nov 23 04:53:00 localhost ceph-mon[289043]: Removing daemon mon.np0005532583 from np0005532583.localdomain -- ports []
Nov 23 04:53:00 localhost ceph-mon[289043]: mon.np0005532586 calling monitor election
Nov 23 04:53:00 localhost ceph-mon[289043]: mon.np0005532584 calling monitor election
Nov 23 04:53:00 localhost ceph-mon[289043]: mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585 in quorum (ranks 0,1,2)
Nov 23 04:53:00 localhost ceph-mon[289043]: mon.np0005532585 calling monitor election
Nov 23 04:53:00 localhost ceph-mon[289043]: overall HEALTH_OK
Nov 23 04:53:00 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 04:53:00 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:01 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 04:53:01 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:01 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 04:53:01 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:01 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 04:53:01 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 23 04:53:01 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:01 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:01 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:02 localhost nova_compute[281613]: 2025-11-23 09:53:02.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:02 localhost nova_compute[281613]: 2025-11-23 09:53:02.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 04:53:02 localhost nova_compute[281613]: 2025-11-23 09:53:02.037 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 04:53:02 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain.devices.0}] v 0)
Nov 23 04:53:02 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:02 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain}] v 0)
Nov 23 04:53:02 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:02 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:02 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:02 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 04:53:02 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:53:03 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain.devices.0}] v 0)
Nov 23 04:53:03 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:03 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain}] v 0)
Nov 23 04:53:03 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:53:03 localhost systemd[1]: tmp-crun.dK6e8F.mount: Deactivated successfully.
Nov 23 04:53:03 localhost podman[300140]: 2025-11-23 09:53:03.084161075 +0000 UTC m=+0.107859884 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:53:03 localhost podman[300140]: 2025-11-23 09:53:03.122986586 +0000 UTC m=+0.146685375 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:53:03 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:53:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:53:03 localhost systemd[1]: tmp-crun.UHdvCI.mount: Deactivated successfully.
Nov 23 04:53:03 localhost podman[300174]: 2025-11-23 09:53:03.224607173 +0000 UTC m=+0.147252741 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:53:03 localhost podman[300174]: 2025-11-23 09:53:03.264755809 +0000 UTC m=+0.187401367 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 04:53:03 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:53:03 localhost podman[300210]: 2025-11-23 09:53:03.359308896 +0000 UTC m=+0.193990295 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:53:03 localhost podman[300210]: 2025-11-23 09:53:03.394439687 +0000 UTC m=+0.229121066 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 23 04:53:03 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:53:03 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:03 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:03 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:53:03 localhost ceph-mon[289043]: Removing np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:03 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:03 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:03 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:03 localhost ceph-mon[289043]: Removing np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:53:03 localhost ceph-mon[289043]: Removing np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:53:03 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:03 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:04 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:04 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:05 localhost nova_compute[281613]: 2025-11-23 09:53:05.033 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:05 localhost nova_compute[281613]: 2025-11-23 09:53:05.053 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:05 localhost nova_compute[281613]: 2025-11-23 09:53:05.053 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:53:05 localhost ceph-mon[289043]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:05 localhost ceph-mon[289043]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:05 localhost ceph-mon[289043]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:05 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:05 localhost ceph-mon[289043]: mon.np0005532586@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:53:05 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain.devices.0}] v 0)
Nov 23 04:53:05 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532583.localdomain}] v 0)
Nov 23 04:53:05 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:05 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 04:53:05 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:05 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:05 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:05 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 04:53:05 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.032 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.032 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.033 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.034 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.047 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.048 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.048 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.049 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.049 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:53:06 localhost ceph-mon[289043]: Reconfiguring crash.np0005532583 (monmap changed)...
Nov 23 04:53:06 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:53:06 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:06 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:06 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:06 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:06 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:06 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:06 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:06 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:06 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Nov 23 04:53:06 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:53:06 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:06 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:06 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:53:06 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3407580757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.545 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.744 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.745 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11958MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.746 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:53:06 localhost nova_compute[281613]: 2025-11-23 09:53:06.746 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.049 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.050 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:53:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.143 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 04:53:07 localhost podman[300528]: 2025-11-23 09:53:07.177685783 +0000 UTC m=+0.081640800 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:53:07 localhost podman[300528]: 2025-11-23 09:53:07.187488546 +0000 UTC m=+0.091443603 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd)
Nov 23 04:53:07 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.276 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.277 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.300 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.332 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 04:53:07 localhost ceph-mon[289043]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:53:07 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:53:07 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:07 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:07 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.356 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:53:07 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:07 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:07 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:07 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:07 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Nov 23 04:53:07 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:53:07 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:07 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:07 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:53:07 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3513795904' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.773 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.779 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.797 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.800 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:53:07 localhost nova_compute[281613]: 2025-11-23 09:53:07.800 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:53:08 localhost ceph-mon[289043]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:53:08 localhost ceph-mon[289043]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:53:08 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:08 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:08 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:53:08 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:08 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:08 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:08 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:08 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 04:53:08 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:08 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:08 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:08 localhost nova_compute[281613]: 2025-11-23 09:53:08.786 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:08 localhost nova_compute[281613]: 2025-11-23 09:53:08.787 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:09 localhost nova_compute[281613]: 2025-11-23 09:53:09.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:09 localhost nova_compute[281613]: 2025-11-23 09:53:09.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:09 localhost nova_compute[281613]: 2025-11-23 09:53:09.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:09 localhost nova_compute[281613]: 2025-11-23 09:53:09.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 04:53:09 localhost nova_compute[281613]: 2025-11-23 09:53:09.041 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:53:09.258 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:53:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:53:09.258 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:53:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:53:09.259 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:53:09 localhost ceph-mon[289043]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:53:09 localhost ceph-mon[289043]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:53:09 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:09 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:09 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:09 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 04:53:09 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:09 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 04:53:09 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:09 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:09 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:09 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:09 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:09 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 04:53:09 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:09 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 04:53:09 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mgr services"} : dispatch
Nov 23 04:53:09 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:09 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:53:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:53:10 localhost ceph-mon[289043]: mon.np0005532586@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:53:10 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:53:10 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:53:10 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:10 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:10 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:10 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:10 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:10 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:10 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:10 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:11 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:11 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 04:53:11 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:53:11 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 04:53:11 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Nov 23 04:53:11 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:11 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:11 localhost podman[240144]: time="2025-11-23T09:53:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:53:11 localhost podman[240144]: @ - - [23/Nov/2025:09:53:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:53:11 localhost podman[240144]: @ - - [23/Nov/2025:09:53:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19175 "" "Go-http-client/1.1"
Nov 23 04:53:11 localhost ceph-mon[289043]: Added label _no_schedule to host np0005532583.localdomain
Nov 23 04:53:11 localhost ceph-mon[289043]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532583.localdomain
Nov 23 04:53:11 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:53:11 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:53:11 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:11 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:11 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:53:11 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:53:11 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:11 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:53:11 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:11 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 04:53:11 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:11 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:11 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:12 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 23 04:53:12 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:12 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"} v 0)
Nov 23 04:53:12 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"} : dispatch
Nov 23 04:53:12 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"}]': finished
Nov 23 04:53:12 localhost ceph-mon[289043]: Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:53:12 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:53:12 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:12 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:12 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:12 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:12 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"} : dispatch
Nov 23 04:53:12 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:12 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:12 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:12 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:12 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 23 04:53:12 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:53:12 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:12 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:13 localhost ceph-mon[289043]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:53:13 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:53:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"}]': finished
Nov 23 04:53:13 localhost ceph-mon[289043]: Removed host np0005532583.localdomain
Nov 23 04:53:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:13 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:53:13 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:13 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:13 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:13 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:13 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Nov 23 04:53:13 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:53:13 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:13 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:14 localhost ceph-mon[289043]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:53:14 localhost ceph-mon[289043]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:53:14 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:14 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:14 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:53:14 localhost ceph-mon[289043]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:53:14 localhost ceph-mon[289043]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:53:14 localhost sshd[300569]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:53:14 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:14 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:14 localhost systemd[1]: Created slice User Slice of UID 1003.
Nov 23 04:53:14 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:14 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Nov 23 04:53:14 localhost systemd-logind[761]: New session 68 of user tripleo-admin.
Nov 23 04:53:14 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:14 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 04:53:14 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:14 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:14 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:14 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Nov 23 04:53:14 localhost systemd[1]: Starting User Manager for UID 1003...
Nov 23 04:53:15 localhost systemd[300573]: Queued start job for default target Main User Target.
Nov 23 04:53:15 localhost systemd[300573]: Created slice User Application Slice.
Nov 23 04:53:15 localhost systemd[300573]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 23 04:53:15 localhost systemd[300573]: Started Daily Cleanup of User's Temporary Directories.
Nov 23 04:53:15 localhost systemd[300573]: Reached target Paths.
Nov 23 04:53:15 localhost systemd[300573]: Reached target Timers.
Nov 23 04:53:15 localhost systemd[300573]: Starting D-Bus User Message Bus Socket...
Nov 23 04:53:15 localhost systemd[300573]: Starting Create User's Volatile Files and Directories...
Nov 23 04:53:15 localhost systemd[300573]: Finished Create User's Volatile Files and Directories.
Nov 23 04:53:15 localhost systemd[300573]: Listening on D-Bus User Message Bus Socket.
Nov 23 04:53:15 localhost systemd[300573]: Reached target Sockets.
Nov 23 04:53:15 localhost systemd[300573]: Reached target Basic System.
Nov 23 04:53:15 localhost systemd[300573]: Reached target Main User Target.
Nov 23 04:53:15 localhost systemd[300573]: Startup finished in 148ms.
Nov 23 04:53:15 localhost systemd[1]: Started User Manager for UID 1003.
Nov 23 04:53:15 localhost systemd[1]: Started Session 68 of User tripleo-admin.
Nov 23 04:53:15 localhost ceph-mon[289043]: mon.np0005532586@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:53:15 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:15 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:15 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:15 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:15 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 04:53:15 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:15 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 04:53:15 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mgr services"} : dispatch
Nov 23 04:53:15 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:15 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:15 localhost python3[300715]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line=    - ip_netmask: 172.18.0.105/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 23 04:53:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:15 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:53:15 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:53:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:15 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:16 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:16 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:16 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:16 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:16 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 04:53:16 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:53:16 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 04:53:16 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Nov 23 04:53:16 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:16 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:16 localhost python3[300861]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.105/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:53:16 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:53:16 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:53:16 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:16 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:16 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:53:17 localhost python3[301006]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.105 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 04:53:17 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:53:17 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:17 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:53:17 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:17 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 23 04:53:17 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:17 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:17 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:17 localhost ceph-mon[289043]: Reconfiguring mon.np0005532585 (monmap changed)...
Nov 23 04:53:17 localhost ceph-mon[289043]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:53:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:17 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:18 localhost podman[301061]: 
Nov 23 04:53:18 localhost podman[301061]: 2025-11-23 09:53:18.231718368 +0000 UTC m=+0.069285869 container create 6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_meninsky, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=553, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, build-date=2025-09-24T08:57:55, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Nov 23 04:53:18 localhost systemd[1]: Started libpod-conmon-6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1.scope.
Nov 23 04:53:18 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:18 localhost podman[301061]: 2025-11-23 09:53:18.206068129 +0000 UTC m=+0.043635710 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:18 localhost podman[301061]: 2025-11-23 09:53:18.312647349 +0000 UTC m=+0.150214870 container init 6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_meninsky, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:53:18 localhost podman[301061]: 2025-11-23 09:53:18.323277833 +0000 UTC m=+0.160845324 container start 6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_meninsky, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_BRANCH=main, ceph=True, RELEASE=main, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:53:18 localhost podman[301061]: 2025-11-23 09:53:18.323494659 +0000 UTC m=+0.161062220 container attach 6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_meninsky, release=553, build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container)
Nov 23 04:53:18 localhost goofy_meninsky[301076]: 167 167
Nov 23 04:53:18 localhost systemd[1]: libpod-6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1.scope: Deactivated successfully.
Nov 23 04:53:18 localhost podman[301061]: 2025-11-23 09:53:18.327911188 +0000 UTC m=+0.165478769 container died 6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_meninsky, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:53:18 localhost podman[301081]: 2025-11-23 09:53:18.425710071 +0000 UTC m=+0.087531098 container remove 6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_meninsky, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:53:18 localhost systemd[1]: libpod-conmon-6f38d367417106aa1b155de2e1bd989c7ababb8d957f28d1234cf667b9028dc1.scope: Deactivated successfully.
Nov 23 04:53:18 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:53:18 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:18 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:53:18 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:18 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 23 04:53:18 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:53:18 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:18 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:18 localhost ceph-mon[289043]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:53:18 localhost ceph-mon[289043]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:53:18 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:18 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:18 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:53:19 localhost podman[301153]: 
Nov 23 04:53:19 localhost podman[301153]: 2025-11-23 09:53:19.109612985 +0000 UTC m=+0.058565942 container create 1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_grothendieck, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main)
Nov 23 04:53:19 localhost systemd[1]: Started libpod-conmon-1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517.scope.
Nov 23 04:53:19 localhost podman[301153]: 2025-11-23 09:53:19.079383194 +0000 UTC m=+0.028336151 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:19 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:19 localhost podman[301153]: 2025-11-23 09:53:19.194592804 +0000 UTC m=+0.143545721 container init 1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_grothendieck, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, release=553, GIT_CLEAN=True, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main)
Nov 23 04:53:19 localhost podman[301153]: 2025-11-23 09:53:19.203566125 +0000 UTC m=+0.152519042 container start 1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_grothendieck, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, release=553, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7)
Nov 23 04:53:19 localhost podman[301153]: 2025-11-23 09:53:19.203837233 +0000 UTC m=+0.152790200 container attach 1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_grothendieck, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., release=553, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:53:19 localhost suspicious_grothendieck[301168]: 167 167
Nov 23 04:53:19 localhost systemd[1]: libpod-1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517.scope: Deactivated successfully.
Nov 23 04:53:19 localhost podman[301153]: 2025-11-23 09:53:19.206225457 +0000 UTC m=+0.155178374 container died 1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_grothendieck, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, distribution-scope=public, vcs-type=git, RELEASE=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:53:19 localhost systemd[1]: var-lib-containers-storage-overlay-9b24fa90a02b61ee0d3fbdc333fb37739cb5b64809173c659123574893d64f34-merged.mount: Deactivated successfully.
Nov 23 04:53:19 localhost systemd[1]: var-lib-containers-storage-overlay-469c0d49cb41253ecc66100a65e8d6917f20dbbbb4690958278daa2d07f23bee-merged.mount: Deactivated successfully.
Nov 23 04:53:19 localhost podman[301173]: 2025-11-23 09:53:19.306999979 +0000 UTC m=+0.086329006 container remove 1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_grothendieck, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.33.12, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553)
Nov 23 04:53:19 localhost systemd[1]: libpod-conmon-1dc484c05828989a80d7cf7efc2cbdfdb04f7ebbd842c268d631b3f94bc61517.scope: Deactivated successfully.
Nov 23 04:53:19 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:53:19 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:19 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:53:19 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:19 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Nov 23 04:53:19 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:53:19 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:19 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:19 localhost ceph-mon[289043]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:53:19 localhost ceph-mon[289043]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:53:19 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:19 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:19 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:53:20 localhost podman[301266]: 
Nov 23 04:53:20 localhost podman[301266]: 2025-11-23 09:53:20.059787571 +0000 UTC m=+0.072498566 container create 45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_johnson, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.33.12, release=553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, version=7)
Nov 23 04:53:20 localhost systemd[1]: Started libpod-conmon-45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3.scope.
Nov 23 04:53:20 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:20 localhost podman[301266]: 2025-11-23 09:53:20.030169057 +0000 UTC m=+0.042880062 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:20 localhost podman[301266]: 2025-11-23 09:53:20.12985622 +0000 UTC m=+0.142567185 container init 45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_johnson, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, name=rhceph, release=553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=)
Nov 23 04:53:20 localhost podman[301266]: 2025-11-23 09:53:20.136680694 +0000 UTC m=+0.149391689 container start 45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_johnson, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:53:20 localhost podman[301266]: 2025-11-23 09:53:20.137036603 +0000 UTC m=+0.149747578 container attach 45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_johnson, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, distribution-scope=public, release=553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:53:20 localhost keen_johnson[301281]: 167 167
Nov 23 04:53:20 localhost systemd[1]: libpod-45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3.scope: Deactivated successfully.
Nov 23 04:53:20 localhost podman[301266]: 2025-11-23 09:53:20.139434307 +0000 UTC m=+0.152145312 container died 45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_johnson, build-date=2025-09-24T08:57:55, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=)
Nov 23 04:53:20 localhost podman[301286]: 2025-11-23 09:53:20.230950751 +0000 UTC m=+0.078334042 container remove 45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_johnson, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, version=7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:53:20 localhost systemd[1]: var-lib-containers-storage-overlay-0aa3199a000f48d62cb15686f70a5bebbf0283b6b403dc72455888d6fc6fd82d-merged.mount: Deactivated successfully.
Nov 23 04:53:20 localhost systemd[1]: libpod-conmon-45985f58695a0666de5821800ae8500fc8ddc2684eeadfb975e17b487602bfe3.scope: Deactivated successfully.
Nov 23 04:53:20 localhost ceph-mon[289043]: mon.np0005532586@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:53:20 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:53:20 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:20 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:53:20 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:20 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 04:53:20 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:20 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:20 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:20 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 04:53:20 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:21 localhost podman[301363]: 
Nov 23 04:53:21 localhost podman[301363]: 2025-11-23 09:53:21.050885424 +0000 UTC m=+0.088905446 container create 0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dubinsky, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, version=7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:53:21 localhost systemd[1]: Started libpod-conmon-0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b.scope.
Nov 23 04:53:21 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:21 localhost podman[301363]: 2025-11-23 09:53:21.017074797 +0000 UTC m=+0.055094859 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:21 localhost podman[301363]: 2025-11-23 09:53:21.124151359 +0000 UTC m=+0.162171381 container init 0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dubinsky, release=553, io.buildah.version=1.33.12, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7)
Nov 23 04:53:21 localhost podman[301363]: 2025-11-23 09:53:21.133812269 +0000 UTC m=+0.171832291 container start 0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dubinsky, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, RELEASE=main, description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12)
Nov 23 04:53:21 localhost podman[301363]: 2025-11-23 09:53:21.134252901 +0000 UTC m=+0.172272923 container attach 0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dubinsky, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=553, RELEASE=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, version=7)
Nov 23 04:53:21 localhost musing_dubinsky[301378]: 167 167
Nov 23 04:53:21 localhost systemd[1]: libpod-0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b.scope: Deactivated successfully.
Nov 23 04:53:21 localhost podman[301363]: 2025-11-23 09:53:21.139688917 +0000 UTC m=+0.177708979 container died 0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dubinsky, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, RELEASE=main, version=7, name=rhceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:53:21 localhost systemd[1]: var-lib-containers-storage-overlay-9e05ebf01de6672ca325be7c6c210edaf275a9f74911b54ceffb2bcf8c947f1e-merged.mount: Deactivated successfully.
Nov 23 04:53:21 localhost podman[301383]: 2025-11-23 09:53:21.243846 +0000 UTC m=+0.090376695 container remove 0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dubinsky, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=)
Nov 23 04:53:21 localhost systemd[1]: libpod-conmon-0109366e17a29e8df83d2976a507dd719d0fc77b6fd95db6a1ad69e22454fa7b.scope: Deactivated successfully.
Nov 23 04:53:21 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:53:21 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:21 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:53:21 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:21 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 23 04:53:21 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:21 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 04:53:21 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mgr services"} : dispatch
Nov 23 04:53:21 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:21 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:21 localhost ceph-mon[289043]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:53:21 localhost ceph-mon[289043]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:53:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:21 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:21 localhost podman[301451]: 
Nov 23 04:53:21 localhost podman[301451]: 2025-11-23 09:53:21.877557848 +0000 UTC m=+0.052194662 container create 9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_villani, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, version=7, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:53:21 localhost systemd[1]: Started libpod-conmon-9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9.scope.
Nov 23 04:53:21 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:21 localhost podman[301451]: 2025-11-23 09:53:21.941221735 +0000 UTC m=+0.115858539 container init 9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_villani, io.buildah.version=1.33.12, release=553, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main)
Nov 23 04:53:21 localhost podman[301451]: 2025-11-23 09:53:21.955660383 +0000 UTC m=+0.130297227 container start 9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_villani, release=553, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:53:21 localhost podman[301451]: 2025-11-23 09:53:21.955996191 +0000 UTC m=+0.130633025 container attach 9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_villani, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, build-date=2025-09-24T08:57:55, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:53:21 localhost hungry_villani[301464]: 167 167
Nov 23 04:53:21 localhost systemd[1]: libpod-9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9.scope: Deactivated successfully.
Nov 23 04:53:21 localhost podman[301451]: 2025-11-23 09:53:21.857740466 +0000 UTC m=+0.032377280 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:21 localhost podman[301451]: 2025-11-23 09:53:21.959267649 +0000 UTC m=+0.133904523 container died 9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_villani, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:53:22 localhost podman[301470]: 2025-11-23 09:53:22.053410374 +0000 UTC m=+0.084125507 container remove 9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_villani, GIT_BRANCH=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, architecture=x86_64, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=7)
Nov 23 04:53:22 localhost systemd[1]: libpod-conmon-9988521da7d97cec917e64d76bdc6c3167b1c950573550aff6d818d8c1df93c9.scope: Deactivated successfully.
Nov 23 04:53:22 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:53:22 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:22 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:53:22 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:22 localhost systemd[1]: tmp-crun.hc6u5f.mount: Deactivated successfully.
Nov 23 04:53:22 localhost systemd[1]: var-lib-containers-storage-overlay-b636ee736e5fe287f47f6cd571c9ed160993b127c3ab77d0066771ed462c4c18-merged.mount: Deactivated successfully.
Nov 23 04:53:22 localhost openstack_network_exporter[242118]: ERROR   09:53:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:53:22 localhost openstack_network_exporter[242118]: ERROR   09:53:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:53:22 localhost openstack_network_exporter[242118]: ERROR   09:53:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:53:22 localhost openstack_network_exporter[242118]: ERROR   09:53:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:53:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:53:22 localhost openstack_network_exporter[242118]: ERROR   09:53:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:53:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:53:22 localhost ceph-mon[289043]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:53:22 localhost ceph-mon[289043]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:53:22 localhost ceph-mon[289043]: Saving service mon spec with placement label:mon
Nov 23 04:53:22 localhost ceph-mon[289043]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)...
Nov 23 04:53:22 localhost ceph-mon[289043]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:53:22 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:22 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:23 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:53:23 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:53:23 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 04:53:23 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:53:23 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 04:53:23 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:23 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 04:53:23 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 23 04:53:23 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:53:23 localhost ceph-mon[289043]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:23 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 23 04:53:23 localhost ceph-mon[289043]: log_channel(audit) log [DBG] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "quorum_status"} : dispatch
Nov 23 04:53:23 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e15 handle_command mon_command({"prefix": "mon rm", "name": "np0005532586"} v 0)
Nov 23 04:53:23 localhost ceph-mon[289043]: log_channel(audit) log [INF] : from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "mon rm", "name": "np0005532586"} : dispatch
Nov 23 04:53:23 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557be54f0000 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Nov 23 04:53:23 localhost ceph-mon[289043]: mon.np0005532586@0(leader) e16  removed from monmap, suicide.
Nov 23 04:53:23 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Nov 23 04:53:23 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Nov 23 04:53:23 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557be54f0840 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Nov 23 04:53:23 localhost podman[301585]: 2025-11-23 09:53:23.890223442 +0000 UTC m=+0.055290104 container died b3b17451ea70c808aff158ac69bfc210563088228e1e2b4a6b30b582532275f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532586, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, RELEASE=main, vendor=Red Hat, Inc., name=rhceph, release=553, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Nov 23 04:53:23 localhost systemd[1]: var-lib-containers-storage-overlay-e1d4f2605014da2e23fb9ab0c41f9cc240870e737345da101948519c4341da2a-merged.mount: Deactivated successfully.
Nov 23 04:53:23 localhost podman[301585]: 2025-11-23 09:53:23.929745102 +0000 UTC m=+0.094811734 container remove b3b17451ea70c808aff158ac69bfc210563088228e1e2b4a6b30b582532275f7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532586, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, release=553)
Nov 23 04:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:53:24 localhost systemd[1]: tmp-crun.4nvk7O.mount: Deactivated successfully.
Nov 23 04:53:24 localhost podman[301750]: 2025-11-23 09:53:24.476670252 +0000 UTC m=+0.122241630 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 04:53:24 localhost podman[301750]: 2025-11-23 09:53:24.561003134 +0000 UTC m=+0.206574502 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter)
Nov 23 04:53:24 localhost podman[301800]: 2025-11-23 09:53:24.570621871 +0000 UTC m=+0.132454153 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:53:24 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:53:24 localhost podman[301800]: 2025-11-23 09:53:24.604980404 +0000 UTC m=+0.166812696 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:53:24 localhost podman[301751]: 2025-11-23 09:53:24.61271221 +0000 UTC m=+0.248621709 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:53:24 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:53:24 localhost podman[301751]: 2025-11-23 09:53:24.627843737 +0000 UTC m=+0.263753246 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 23 04:53:24 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:53:24 localhost systemd[1]: tmp-crun.PgIzqW.mount: Deactivated successfully.
Nov 23 04:53:25 localhost systemd[1]: ceph-46550e70-79cb-5f55-bf6d-1204b97e083b@mon.np0005532586.service: Deactivated successfully.
Nov 23 04:53:25 localhost systemd[1]: Stopped Ceph mon.np0005532586 for 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 04:53:25 localhost systemd[1]: ceph-46550e70-79cb-5f55-bf6d-1204b97e083b@mon.np0005532586.service: Consumed 11.780s CPU time.
Nov 23 04:53:25 localhost systemd[1]: Reloading.
Nov 23 04:53:25 localhost systemd-rc-local-generator[302055]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:53:25 localhost systemd-sysv-generator[302061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:29 localhost sshd[302159]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:53:29 localhost sshd[302160]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:53:34 localhost podman[302162]: 2025-11-23 09:53:34.179833763 +0000 UTC m=+0.084694893 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:53:34 localhost podman[302162]: 2025-11-23 09:53:34.189984665 +0000 UTC m=+0.094845815 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:53:34 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:53:34 localhost podman[302161]: 2025-11-23 09:53:34.233314127 +0000 UTC m=+0.138507946 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 23 04:53:34 localhost podman[302161]: 2025-11-23 09:53:34.267923115 +0000 UTC m=+0.173116974 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 23 04:53:34 localhost systemd[1]: tmp-crun.bdBY6m.mount: Deactivated successfully.
Nov 23 04:53:34 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:53:34 localhost podman[302163]: 2025-11-23 09:53:34.287058019 +0000 UTC m=+0.186446273 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:53:34 localhost podman[302163]: 2025-11-23 09:53:34.350925522 +0000 UTC m=+0.250313776 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 04:53:34 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:53:35 localhost podman[302279]: 
Nov 23 04:53:35 localhost podman[302279]: 2025-11-23 09:53:35.840770482 +0000 UTC m=+0.069831533 container create 06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_aryabhata, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, architecture=x86_64, distribution-scope=public, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=)
Nov 23 04:53:35 localhost systemd[1]: Started libpod-conmon-06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a.scope.
Nov 23 04:53:35 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:35 localhost podman[302279]: 2025-11-23 09:53:35.815453813 +0000 UTC m=+0.044514884 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:35 localhost podman[302279]: 2025-11-23 09:53:35.917314006 +0000 UTC m=+0.146375047 container init 06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_aryabhata, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:53:35 localhost systemd[1]: tmp-crun.IP84zQ.mount: Deactivated successfully.
Nov 23 04:53:35 localhost podman[302279]: 2025-11-23 09:53:35.930699075 +0000 UTC m=+0.159760116 container start 06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_aryabhata, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, release=553, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph)
Nov 23 04:53:35 localhost podman[302279]: 2025-11-23 09:53:35.931209679 +0000 UTC m=+0.160270760 container attach 06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_aryabhata, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, release=553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 23 04:53:35 localhost suspicious_aryabhata[302295]: 167 167
Nov 23 04:53:35 localhost systemd[1]: libpod-06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a.scope: Deactivated successfully.
Nov 23 04:53:35 localhost podman[302279]: 2025-11-23 09:53:35.935149914 +0000 UTC m=+0.164210955 container died 06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_aryabhata, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, architecture=x86_64, release=553, io.buildah.version=1.33.12, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vcs-type=git)
Nov 23 04:53:36 localhost podman[302300]: 2025-11-23 09:53:36.035233389 +0000 UTC m=+0.088321011 container remove 06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_aryabhata, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=)
Nov 23 04:53:36 localhost systemd[1]: libpod-conmon-06012974ed9d72413827dd5e10b6f135e59ef752869bdf2837ce27cb09f0ef6a.scope: Deactivated successfully.
Nov 23 04:53:36 localhost sshd[302369]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:53:36 localhost podman[302370]: 
Nov 23 04:53:36 localhost podman[302370]: 2025-11-23 09:53:36.756599627 +0000 UTC m=+0.081783074 container create f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_neumann, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public)
Nov 23 04:53:36 localhost systemd[1]: Started libpod-conmon-f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f.scope.
Nov 23 04:53:36 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:36 localhost podman[302370]: 2025-11-23 09:53:36.816675619 +0000 UTC m=+0.141859066 container init f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_neumann, name=rhceph, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, ceph=True, release=553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:53:36 localhost podman[302370]: 2025-11-23 09:53:36.725040151 +0000 UTC m=+0.050223618 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:36 localhost podman[302370]: 2025-11-23 09:53:36.826376819 +0000 UTC m=+0.151560256 container start f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_neumann, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc.)
Nov 23 04:53:36 localhost podman[302370]: 2025-11-23 09:53:36.826620305 +0000 UTC m=+0.151803782 container attach f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_neumann, ceph=True, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main)
Nov 23 04:53:36 localhost amazing_neumann[302385]: 167 167
Nov 23 04:53:36 localhost systemd[1]: libpod-f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f.scope: Deactivated successfully.
Nov 23 04:53:36 localhost podman[302370]: 2025-11-23 09:53:36.828920377 +0000 UTC m=+0.154103814 container died f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_neumann, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, name=rhceph, version=7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main)
Nov 23 04:53:36 localhost systemd[1]: var-lib-containers-storage-overlay-b637eaa9c98fc45dfc5d97a24435300299957936263e0051f1cc64ede9ee704b-merged.mount: Deactivated successfully.
Nov 23 04:53:36 localhost systemd[1]: tmp-crun.k41f4w.mount: Deactivated successfully.
Nov 23 04:53:36 localhost systemd[1]: var-lib-containers-storage-overlay-2dcb6fb585433c956793e769f72053200ea8c5fe01f93314a7135a4e68d06f0c-merged.mount: Deactivated successfully.
Nov 23 04:53:36 localhost podman[302390]: 2025-11-23 09:53:36.906922459 +0000 UTC m=+0.072854505 container remove f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_neumann, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=553, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:53:36 localhost systemd[1]: libpod-conmon-f73256363d2c9ffbe14733974c388f8f4a8fd6adf2f6442b2f51531e1f79053f.scope: Deactivated successfully.
Nov 23 04:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:53:37 localhost podman[302485]: 2025-11-23 09:53:37.384626443 +0000 UTC m=+0.091929067 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 04:53:37 localhost podman[302485]: 2025-11-23 09:53:37.418308256 +0000 UTC m=+0.125610880 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:53:37 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:53:37 localhost podman[302559]: 
Nov 23 04:53:37 localhost podman[302559]: 2025-11-23 09:53:37.797894897 +0000 UTC m=+0.078165467 container create 1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:53:37 localhost systemd[1]: Started libpod-conmon-1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef.scope.
Nov 23 04:53:37 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:37 localhost podman[302559]: 2025-11-23 09:53:37.764871521 +0000 UTC m=+0.045142091 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:37 localhost podman[302559]: 2025-11-23 09:53:37.870598157 +0000 UTC m=+0.150868757 container init 1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, release=553, build-date=2025-09-24T08:57:55, version=7, distribution-scope=public, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:53:37 localhost podman[302559]: 2025-11-23 09:53:37.881916041 +0000 UTC m=+0.162186611 container start 1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:53:37 localhost podman[302559]: 2025-11-23 09:53:37.882201928 +0000 UTC m=+0.162472498 container attach 1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, ceph=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_CLEAN=True)
Nov 23 04:53:37 localhost busy_mahavira[302574]: 167 167
Nov 23 04:53:37 localhost systemd[1]: libpod-1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef.scope: Deactivated successfully.
Nov 23 04:53:37 localhost podman[302559]: 2025-11-23 09:53:37.888323552 +0000 UTC m=+0.168594142 container died 1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, RELEASE=main, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, version=7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, release=553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Nov 23 04:53:37 localhost podman[302581]: 2025-11-23 09:53:37.993592576 +0000 UTC m=+0.096381565 container remove 1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553)
Nov 23 04:53:37 localhost systemd[1]: libpod-conmon-1a870176f0c19cbde7837fcb08c13dd616eccc4f31112f238b08e2563ea12fef.scope: Deactivated successfully.
Nov 23 04:53:38 localhost podman[302598]: 
Nov 23 04:53:38 localhost podman[302598]: 2025-11-23 09:53:38.114798857 +0000 UTC m=+0.078346862 container create 7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_visvesvaraya, build-date=2025-09-24T08:57:55, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12)
Nov 23 04:53:38 localhost systemd[1]: Started libpod-conmon-7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4.scope.
Nov 23 04:53:38 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25982fa365e1490861c3caa1b1f21c5a0dc8db71dcebbcf5783f10acf40df2e/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 23 04:53:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25982fa365e1490861c3caa1b1f21c5a0dc8db71dcebbcf5783f10acf40df2e/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 23 04:53:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25982fa365e1490861c3caa1b1f21c5a0dc8db71dcebbcf5783f10acf40df2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 04:53:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25982fa365e1490861c3caa1b1f21c5a0dc8db71dcebbcf5783f10acf40df2e/merged/var/lib/ceph/mon/ceph-np0005532586 supports timestamps until 2038 (0x7fffffff)
Nov 23 04:53:38 localhost podman[302598]: 2025-11-23 09:53:38.17826876 +0000 UTC m=+0.141816765 container init 7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_visvesvaraya, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, release=553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main)
Nov 23 04:53:38 localhost podman[302598]: 2025-11-23 09:53:38.083506548 +0000 UTC m=+0.047054633 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:38 localhost podman[302598]: 2025-11-23 09:53:38.187363854 +0000 UTC m=+0.150911849 container start 7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_visvesvaraya, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, version=7, build-date=2025-09-24T08:57:55)
Nov 23 04:53:38 localhost podman[302598]: 2025-11-23 09:53:38.187771264 +0000 UTC m=+0.151319309 container attach 7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_visvesvaraya, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:53:38 localhost systemd[1]: libpod-7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4.scope: Deactivated successfully.
Nov 23 04:53:38 localhost podman[302598]: 2025-11-23 09:53:38.286867723 +0000 UTC m=+0.250415738 container died 7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_visvesvaraya, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, ceph=True)
Nov 23 04:53:38 localhost podman[302639]: 2025-11-23 09:53:38.399930385 +0000 UTC m=+0.093494358 container remove 7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_visvesvaraya, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, io.buildah.version=1.33.12, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc.)
Nov 23 04:53:38 localhost systemd[1]: libpod-conmon-7757fed4b50b72bd35c80a291d9a93b0e57d475ac37e7dfc1148b6df081ea4d4.scope: Deactivated successfully.
Nov 23 04:53:38 localhost systemd[1]: Reloading.
Nov 23 04:53:38 localhost systemd-sysv-generator[302684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:53:38 localhost systemd-rc-local-generator[302677]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:38 localhost systemd[1]: var-lib-containers-storage-overlay-001622493ff134ebe4e10bc9d9d24e157f7d23a72102a61a782da984bca561af-merged.mount: Deactivated successfully.
Nov 23 04:53:38 localhost systemd[1]: Reloading.
Nov 23 04:53:38 localhost systemd-rc-local-generator[302725]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 23 04:53:39 localhost systemd-sysv-generator[302729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 23 04:53:39 localhost systemd[1]: Starting Ceph mon.np0005532586 for 46550e70-79cb-5f55-bf6d-1204b97e083b...
Nov 23 04:53:39 localhost podman[302784]: 
Nov 23 04:53:39 localhost podman[302784]: 2025-11-23 09:53:39.627664086 +0000 UTC m=+0.078465206 container create 740ff565d6af64dec6008da55adfa27dc079b3ac2e6e2069ba337d49becb8ee6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532586, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, ceph=True, name=rhceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph)
Nov 23 04:53:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8ce23513e9a4c169b4e0b1ba076559fe78286e2676b5d63563946e31aa4f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 04:53:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8ce23513e9a4c169b4e0b1ba076559fe78286e2676b5d63563946e31aa4f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 04:53:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8ce23513e9a4c169b4e0b1ba076559fe78286e2676b5d63563946e31aa4f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 04:53:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8ce23513e9a4c169b4e0b1ba076559fe78286e2676b5d63563946e31aa4f64/merged/var/lib/ceph/mon/ceph-np0005532586 supports timestamps until 2038 (0x7fffffff)
Nov 23 04:53:39 localhost podman[302784]: 2025-11-23 09:53:39.686869334 +0000 UTC m=+0.137670454 container init 740ff565d6af64dec6008da55adfa27dc079b3ac2e6e2069ba337d49becb8ee6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532586, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True)
Nov 23 04:53:39 localhost podman[302784]: 2025-11-23 09:53:39.596265784 +0000 UTC m=+0.047066934 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:39 localhost podman[302784]: 2025-11-23 09:53:39.695991579 +0000 UTC m=+0.146792709 container start 740ff565d6af64dec6008da55adfa27dc079b3ac2e6e2069ba337d49becb8ee6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mon-np0005532586, GIT_BRANCH=main, name=rhceph, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:53:39 localhost bash[302784]: 740ff565d6af64dec6008da55adfa27dc079b3ac2e6e2069ba337d49becb8ee6
Nov 23 04:53:39 localhost systemd[1]: Started Ceph mon.np0005532586 for 46550e70-79cb-5f55-bf6d-1204b97e083b.
Nov 23 04:53:39 localhost ceph-mon[302802]: set uid:gid to 167:167 (ceph:ceph)
Nov 23 04:53:39 localhost ceph-mon[302802]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Nov 23 04:53:39 localhost ceph-mon[302802]: pidfile_write: ignore empty --pid-file
Nov 23 04:53:39 localhost ceph-mon[302802]: load: jerasure load: lrc 
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: RocksDB version: 7.9.2
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Git sha 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: DB SUMMARY
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: DB Session ID:  QS3FPCEL51SUB5WPWX0A
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: CURRENT file:  CURRENT
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: IDENTITY file:  IDENTITY
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005532586/store.db dir, Total Num: 0, files: 
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005532586/store.db: 000004.log size: 636 ; 
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                         Options.error_if_exists: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                       Options.create_if_missing: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                         Options.paranoid_checks: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                                     Options.env: 0x5569c75b69e0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                                Options.info_log: 0x5569c8724d20
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.max_file_opening_threads: 16
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                              Options.statistics: (nil)
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                               Options.use_fsync: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                       Options.max_log_file_size: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                         Options.allow_fallocate: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                        Options.use_direct_reads: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:          Options.create_missing_column_families: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                              Options.db_log_dir: 
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                                 Options.wal_dir: 
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                   Options.advise_random_on_open: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                    Options.write_buffer_manager: 0x5569c8735540
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                            Options.rate_limiter: (nil)
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.unordered_write: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                               Options.row_cache: None
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                              Options.wal_filter: None
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.allow_ingest_behind: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.two_write_queues: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.manual_wal_flush: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.wal_compression: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.atomic_flush: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                 Options.log_readahead_size: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.allow_data_in_errors: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.db_host_id: __hostname__
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.max_background_jobs: 2
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.max_background_compactions: -1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.max_subcompactions: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.max_total_wal_size: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                          Options.max_open_files: -1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                          Options.bytes_per_sync: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:       Options.compaction_readahead_size: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.max_background_flushes: -1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Compression algorithms supported:
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: #011kZSTD supported: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: #011kXpressCompression supported: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: #011kBZip2Compression supported: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: #011kLZ4Compression supported: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: #011kZlibCompression supported: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: #011kSnappyCompression supported: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005532586/store.db/MANIFEST-000005
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:           Options.merge_operator: 
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:        Options.compaction_filter: None
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:        Options.compaction_filter_factory: None
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:  Options.sst_partitioner_factory: None
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5569c8724980)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5569c8721350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:        Options.write_buffer_size: 33554432
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:  Options.max_write_buffer_number: 2
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:          Options.compression: NoCompression
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:       Options.prefix_extractor: nullptr
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.num_levels: 7
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.compression_opts.level: 32767
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:               Options.compression_opts.strategy: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                  Options.compression_opts.enabled: false
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                        Options.arena_block_size: 1048576
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.disable_auto_compactions: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                   Options.table_properties_collectors: 
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                   Options.inplace_update_support: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                           Options.bloom_locality: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                    Options.max_successive_merges: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.paranoid_file_checks: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.force_consistency_checks: 1
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.report_bg_io_stats: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                               Options.ttl: 2592000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                       Options.enable_blob_files: false
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                           Options.min_blob_size: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                          Options.blob_file_size: 268435456
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb:                Options.blob_file_starting_level: 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005532586/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 30f02bec-0087-464e-96d9-108a203904da
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891619751018, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891619753504, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, "raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891619753636, "job": 1, "event": "recovery_finished"}
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5569c8748e00
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: DB pointer 0x5569c883e000
Nov 23 04:53:39 localhost ceph-mon[302802]: mon.np0005532586 does not exist in monmap, will attempt to join an existing cluster
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 04:53:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.72 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.72 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5569c8721350#2 capacity: 512.00 MB usage: 0.98 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.77 KB,0.000146031%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 04:53:39 localhost ceph-mon[302802]: using public_addr v2:172.18.0.105:0/0 -> [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0]
Nov 23 04:53:39 localhost ceph-mon[302802]: starting mon.np0005532586 rank -1 at public addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] at bind addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005532586 fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:53:39 localhost ceph-mon[302802]: mon.np0005532586@-1(???) e0 preinit fsid 46550e70-79cb-5f55-bf6d-1204b97e083b
Nov 23 04:53:39 localhost podman[302834]: 
Nov 23 04:53:39 localhost podman[302834]: 2025-11-23 09:53:39.860582783 +0000 UTC m=+0.077523010 container create 91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_banach, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Nov 23 04:53:39 localhost systemd[1]: Started libpod-conmon-91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e.scope.
Nov 23 04:53:39 localhost podman[302834]: 2025-11-23 09:53:39.82952135 +0000 UTC m=+0.046461547 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:39 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:39 localhost podman[302834]: 2025-11-23 09:53:39.961903111 +0000 UTC m=+0.178843288 container init 91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_banach, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7)
Nov 23 04:53:39 localhost podman[302834]: 2025-11-23 09:53:39.975217258 +0000 UTC m=+0.192157435 container start 91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_banach, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph)
Nov 23 04:53:39 localhost podman[302834]: 2025-11-23 09:53:39.978974429 +0000 UTC m=+0.195914646 container attach 91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_banach, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, distribution-scope=public, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:53:39 localhost infallible_banach[302860]: 167 167
Nov 23 04:53:39 localhost systemd[1]: libpod-91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e.scope: Deactivated successfully.
Nov 23 04:53:39 localhost podman[302834]: 2025-11-23 09:53:39.982040141 +0000 UTC m=+0.198980328 container died 91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_banach, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., ceph=True, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.buildah.version=1.33.12, release=553, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph)
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing) e16 sync_obtain_latest_monmap
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing) e16 sync_obtain_latest_monmap obtained monmap e16
Nov 23 04:53:40 localhost podman[302865]: 2025-11-23 09:53:40.075383875 +0000 UTC m=+0.081308242 container remove 91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_banach, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.33.12, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=)
Nov 23 04:53:40 localhost systemd[1]: libpod-conmon-91838b10047be0714a361b50892856c7983fe2771bfe4443d205e6ecabc4e88e.scope: Deactivated successfully.
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing).mds e16 new map
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-23T08:00:26.486221+0000#012modified#0112025-11-23T09:47:19.846415+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01179#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26392}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26392 members: 26392#012[mds.mds.np0005532586.mfohsb{0:26392} state up:active seq 12 addr [v2:172.18.0.108:6808/2718449296,v1:172.18.0.108:6809/2718449296] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005532585.jcltnl{-1:17133} state up:standby seq 1 addr [v2:172.18.0.107:6808/563301557,v1:172.18.0.107:6809/563301557] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005532584.aoxjmw{-1:17139} state up:standby seq 1 addr [v2:172.18.0.106:6808/2261302276,v1:172.18.0.106:6809/2261302276] compat {c=[1],r=[1],i=[17ff]}]
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing).osd e85 crush map has features 3314933000852226048, adjusting msgr requires
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Nov 23 04:53:40 localhost ceph-mon[302802]: Removing key for mgr.np0005532583.orhywt
Nov 23 04:53:40 localhost ceph-mon[302802]: Safe to remove mon.np0005532583: new quorum should be ['np0005532586', 'np0005532584', 'np0005532585'] (from ['np0005532586', 'np0005532584', 'np0005532585'])
Nov 23 04:53:40 localhost ceph-mon[302802]: Removing monitor np0005532583 from monmap...
Nov 23 04:53:40 localhost ceph-mon[302802]: Removing daemon mon.np0005532583 from np0005532583.localdomain -- ports []
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586 calling monitor election
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532584 calling monitor election
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586 is new leader, mons np0005532586,np0005532584,np0005532585 in quorum (ranks 0,1,2)
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532585 calling monitor election
Nov 23 04:53:40 localhost ceph-mon[302802]: overall HEALTH_OK
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Removing np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Removing np0005532583.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:53:40 localhost ceph-mon[302802]: Removing np0005532583.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532583.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring crash.np0005532583 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532583 on np0005532583.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Added label _no_schedule to host np0005532583.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005532583.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain"}]': finished
Nov 23 04:53:40 localhost ceph-mon[302802]: Removed host np0005532583.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mon.np0005532585 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: Saving service mon spec with placement label:mon
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: Remove daemons mon.np0005532586
Nov 23 04:53:40 localhost ceph-mon[302802]: Safe to remove mon.np0005532586: new quorum should be ['np0005532584', 'np0005532585'] (from ['np0005532584', 'np0005532585'])
Nov 23 04:53:40 localhost ceph-mon[302802]: Removing monitor np0005532586 from monmap...
Nov 23 04:53:40 localhost ceph-mon[302802]: Removing daemon mon.np0005532586 from np0005532586.localdomain -- ports []
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532584 calling monitor election
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532585 calling monitor election
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532584 is new leader, mons np0005532584,np0005532585 in quorum (ranks 0,1)
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: overall HEALTH_OK
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:53:40 localhost ceph-mon[302802]: Deploying daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:53:40 localhost ceph-mon[302802]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:40 localhost ceph-mon[302802]: mon.np0005532586@-1(synchronizing).paxosservice(auth 1..40) refresh upgraded, format 0 -> 3
Nov 23 04:53:40 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x557be54f09a0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Nov 23 04:53:40 localhost systemd[1]: tmp-crun.qtwUIG.mount: Deactivated successfully.
Nov 23 04:53:40 localhost systemd[1]: var-lib-containers-storage-overlay-449ce19216238f91fd96e5116d8184305b12790dcb35178c4fded2d5f1886af4-merged.mount: Deactivated successfully.
Nov 23 04:53:40 localhost podman[302943]: 
Nov 23 04:53:40 localhost podman[302943]: 2025-11-23 09:53:40.907111454 +0000 UTC m=+0.074779077 container create 0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_spence, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, name=rhceph, release=553, build-date=2025-09-24T08:57:55)
Nov 23 04:53:40 localhost systemd[1]: Started libpod-conmon-0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536.scope.
Nov 23 04:53:40 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:40 localhost podman[302943]: 2025-11-23 09:53:40.871834807 +0000 UTC m=+0.039502420 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:40 localhost podman[302943]: 2025-11-23 09:53:40.975441416 +0000 UTC m=+0.143109029 container init 0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_spence, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, architecture=x86_64, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:53:40 localhost podman[302943]: 2025-11-23 09:53:40.984781947 +0000 UTC m=+0.152449560 container start 0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_spence, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, release=553, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True)
Nov 23 04:53:40 localhost podman[302943]: 2025-11-23 09:53:40.985100156 +0000 UTC m=+0.152767789 container attach 0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_spence, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vendor=Red Hat, Inc., architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Nov 23 04:53:40 localhost exciting_spence[302959]: 167 167
Nov 23 04:53:40 localhost systemd[1]: libpod-0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536.scope: Deactivated successfully.
Nov 23 04:53:40 localhost podman[302943]: 2025-11-23 09:53:40.988424485 +0000 UTC m=+0.156092128 container died 0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_spence, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, release=553)
Nov 23 04:53:41 localhost podman[302964]: 2025-11-23 09:53:41.088326185 +0000 UTC m=+0.086557223 container remove 0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_spence, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=553)
Nov 23 04:53:41 localhost systemd[1]: libpod-conmon-0f0579b5c4d46a68941aabf7de6612b49e387853b007a44de8a72c718bda2536.scope: Deactivated successfully.
Nov 23 04:53:41 localhost podman[240144]: time="2025-11-23T09:53:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:53:41 localhost podman[240144]: @ - - [23/Nov/2025:09:53:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:53:41 localhost podman[240144]: @ - - [23/Nov/2025:09:53:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19170 "" "Go-http-client/1.1"
Nov 23 04:53:41 localhost systemd[1]: var-lib-containers-storage-overlay-bef97b3f1467e2e78cb95a9c119a3f12d4f01faf293eea031713f0f20c3a5180-merged.mount: Deactivated successfully.
Nov 23 04:53:42 localhost ceph-mon[302802]: mon.np0005532586@-1(probing) e17  my rank is now 2 (was -1)
Nov 23 04:53:42 localhost ceph-mon[302802]: log_channel(cluster) log [INF] : mon.np0005532586 calling monitor election
Nov 23 04:53:42 localhost ceph-mon[302802]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 23 04:53:42 localhost ceph-mon[302802]: mon.np0005532586@2(electing) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:53:45 localhost ceph-mon[302802]: mon.np0005532586@2(electing) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:53:45 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 23 04:53:45 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 23 04:53:45 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 23 04:53:45 localhost ceph-mon[302802]: mgrc update_daemon_metadata mon.np0005532586 metadata {addrs=[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005532586.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005532586.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116604,os=Linux}
Nov 23 04:53:45 localhost ceph-mon[302802]: mon.np0005532584 calling monitor election
Nov 23 04:53:45 localhost ceph-mon[302802]: mon.np0005532585 calling monitor election
Nov 23 04:53:45 localhost ceph-mon[302802]: mon.np0005532586 calling monitor election
Nov 23 04:53:45 localhost ceph-mon[302802]: mon.np0005532584 is new leader, mons np0005532584,np0005532585,np0005532586 in quorum (ranks 0,1,2)
Nov 23 04:53:45 localhost ceph-mon[302802]: overall HEALTH_OK
Nov 23 04:53:45 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:45 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:46 localhost podman[303035]: 
Nov 23 04:53:46 localhost podman[303035]: 2025-11-23 09:53:46.20965149 +0000 UTC m=+0.064179953 container create eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_volhard, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, GIT_BRANCH=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.)
Nov 23 04:53:46 localhost systemd[1]: Started libpod-conmon-eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e.scope.
Nov 23 04:53:46 localhost podman[303035]: 2025-11-23 09:53:46.178411132 +0000 UTC m=+0.032939605 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:53:46 localhost systemd[1]: Started libcrun container.
Nov 23 04:53:46 localhost podman[303035]: 2025-11-23 09:53:46.291458874 +0000 UTC m=+0.145987327 container init eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_volhard, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, name=rhceph, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12)
Nov 23 04:53:46 localhost pedantic_volhard[303050]: 167 167
Nov 23 04:53:46 localhost systemd[1]: libpod-eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e.scope: Deactivated successfully.
Nov 23 04:53:46 localhost podman[303035]: 2025-11-23 09:53:46.30470599 +0000 UTC m=+0.159234443 container start eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_volhard, release=553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, version=7)
Nov 23 04:53:46 localhost podman[303035]: 2025-11-23 09:53:46.305122991 +0000 UTC m=+0.159651494 container attach eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_volhard, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph)
Nov 23 04:53:46 localhost podman[303035]: 2025-11-23 09:53:46.308262685 +0000 UTC m=+0.162791208 container died eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_volhard, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, release=553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.expose-services=, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:53:46 localhost podman[303055]: 2025-11-23 09:53:46.404338571 +0000 UTC m=+0.087116757 container remove eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_volhard, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, build-date=2025-09-24T08:57:55, release=553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.expose-services=)
Nov 23 04:53:46 localhost systemd[1]: libpod-conmon-eb93f6788397811547cb5bf5c84a33e8499fe95daa36a9c65d6c57ef76a5508e.scope: Deactivated successfully.
Nov 23 04:53:46 localhost ceph-mon[302802]: Reconfiguring mgr.np0005532586.thmvqb (monmap changed)...
Nov 23 04:53:46 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:46 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532586.thmvqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:46 localhost ceph-mon[302802]: Reconfiguring daemon mgr.np0005532586.thmvqb on np0005532586.localdomain
Nov 23 04:53:46 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:46 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:47 localhost systemd[1]: var-lib-containers-storage-overlay-e5ae481392e0adc25aa01c8466983229fe1c347ecc3b3e6fc97584be2786a048-merged.mount: Deactivated successfully.
Nov 23 04:53:47 localhost podman[303180]: 2025-11-23 09:53:47.556777743 +0000 UTC m=+0.133626045 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.component=rhceph-container)
Nov 23 04:53:47 localhost podman[303180]: 2025-11-23 09:53:47.690104659 +0000 UTC m=+0.266952921 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.expose-services=, version=7, architecture=x86_64, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git)
Nov 23 04:53:49 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:49 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:49 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:53:49 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:49 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:49 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:53:50 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:50 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:50 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:50 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532584.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:50 localhost nova_compute[281613]: 2025-11-23 09:53:50.811 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:53:51 localhost ceph-mon[302802]: Reconfiguring crash.np0005532584 (monmap changed)...
Nov 23 04:53:51 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532584 on np0005532584.localdomain
Nov 23 04:53:51 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:51 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:51 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:51 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:53:52 localhost openstack_network_exporter[242118]: ERROR   09:53:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:53:52 localhost openstack_network_exporter[242118]: ERROR   09:53:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:53:52 localhost openstack_network_exporter[242118]: ERROR   09:53:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:53:52 localhost openstack_network_exporter[242118]: ERROR   09:53:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:53:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:53:52 localhost openstack_network_exporter[242118]: ERROR   09:53:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:53:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:53:52 localhost ceph-mon[302802]: Reconfiguring osd.2 (monmap changed)...
Nov 23 04:53:52 localhost ceph-mon[302802]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:53:52 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:52 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:52 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.461934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632462029, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 12632, "num_deletes": 254, "total_data_size": 20711923, "memory_usage": 21318160, "flush_reason": "Manual Compaction"}
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632527625, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 17228352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 12637, "table_properties": {"data_size": 17157377, "index_size": 39509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30405, "raw_key_size": 323075, "raw_average_key_size": 26, "raw_value_size": 16948714, "raw_average_value_size": 1394, "num_data_blocks": 1514, "num_entries": 12154, "num_filter_entries": 12154, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891620, "oldest_key_time": 1763891620, "file_creation_time": 1763891632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 65799 microseconds, and 32934 cpu microseconds.
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.527735) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 17228352 bytes OK
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.527790) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.530087) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.530108) EVENT_LOG_v1 {"time_micros": 1763891632530101, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.530126) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 20624469, prev total WAL file size 20624469, number of live WAL files 2.
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.533741) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end)
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(16MB) 8(1762B)]
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632533840, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 17230114, "oldest_snapshot_seqno": -1}
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 11904 keys, 17224816 bytes, temperature: kUnknown
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632614488, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 17224816, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17154514, "index_size": 39481, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29765, "raw_key_size": 318306, "raw_average_key_size": 26, "raw_value_size": 16949119, "raw_average_value_size": 1423, "num_data_blocks": 1513, "num_entries": 11904, "num_filter_entries": 11904, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763891632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.614872) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 17224816 bytes
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.616668) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.2 rd, 213.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(16.4, 0.0 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 12159, records dropped: 255 output_compression: NoCompression
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.616699) EVENT_LOG_v1 {"time_micros": 1763891632616685, "job": 4, "event": "compaction_finished", "compaction_time_micros": 80807, "compaction_time_cpu_micros": 45466, "output_level": 6, "num_output_files": 1, "total_output_size": 17224816, "num_input_records": 12159, "num_output_records": 11904, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632619219, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891632619285, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 23 04:53:52 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:53:52.533662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:53:53 localhost ceph-mon[302802]: Reconfiguring osd.5 (monmap changed)...
Nov 23 04:53:53 localhost ceph-mon[302802]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:53:53 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:53 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:53 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:53 localhost ceph-mon[302802]: Reconfiguring mds.mds.np0005532584.aoxjmw (monmap changed)...
Nov 23 04:53:53 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532584.aoxjmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:53 localhost ceph-mon[302802]: Reconfiguring daemon mds.mds.np0005532584.aoxjmw on np0005532584.localdomain
Nov 23 04:53:54 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:54 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:54 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:54 localhost ceph-mon[302802]: Reconfiguring mgr.np0005532584.naxwxy (monmap changed)...
Nov 23 04:53:54 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532584.naxwxy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:54 localhost ceph-mon[302802]: Reconfiguring daemon mgr.np0005532584.naxwxy on np0005532584.localdomain
Nov 23 04:53:54 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:54 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:54 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:54 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532585.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:53:55 localhost podman[303640]: 2025-11-23 09:53:55.18500982 +0000 UTC m=+0.089273156 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:53:55 localhost podman[303640]: 2025-11-23 09:53:55.225929077 +0000 UTC m=+0.130192373 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:53:55 localhost podman[303639]: 2025-11-23 09:53:55.235130114 +0000 UTC m=+0.139453061 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:53:55 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:53:55 localhost podman[303639]: 2025-11-23 09:53:55.253228779 +0000 UTC m=+0.157551776 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 23 04:53:55 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:53:55 localhost podman[303641]: 2025-11-23 09:53:55.342816732 +0000 UTC m=+0.243415160 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:53:55 localhost podman[303641]: 2025-11-23 09:53:55.354221158 +0000 UTC m=+0.254819566 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:53:55 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:53:55 localhost ceph-mon[302802]: Reconfiguring crash.np0005532585 (monmap changed)...
Nov 23 04:53:55 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532585 on np0005532585.localdomain
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:55 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:56 localhost ceph-mon[302802]: Reconfig service osd.default_drive_group
Nov 23 04:53:56 localhost ceph-mon[302802]: Reconfiguring osd.0 (monmap changed)...
Nov 23 04:53:56 localhost ceph-mon[302802]: Reconfiguring daemon osd.0 on np0005532585.localdomain
Nov 23 04:53:56 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:56 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:56 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:56 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:56 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:56 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:56 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Nov 23 04:53:58 localhost ceph-mon[302802]: Reconfiguring osd.3 (monmap changed)...
Nov 23 04:53:58 localhost ceph-mon[302802]: Reconfiguring daemon osd.3 on np0005532585.localdomain
Nov 23 04:53:58 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:58 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:58 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:58 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:58 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:58 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532585.jcltnl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:53:59 localhost ceph-mon[302802]: Reconfiguring mds.mds.np0005532585.jcltnl (monmap changed)...
Nov 23 04:53:59 localhost ceph-mon[302802]: Reconfiguring daemon mds.mds.np0005532585.jcltnl on np0005532585.localdomain
Nov 23 04:53:59 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:59 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:53:59 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:53:59 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005532585.gzafiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 23 04:54:00 localhost podman[303754]: 
Nov 23 04:54:00 localhost ceph-mon[302802]: Reconfiguring mgr.np0005532585.gzafiw (monmap changed)...
Nov 23 04:54:00 localhost ceph-mon[302802]: Reconfiguring daemon mgr.np0005532585.gzafiw on np0005532585.localdomain
Nov 23 04:54:00 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:54:00 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:54:00 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005532586.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 23 04:54:00 localhost podman[303754]: 2025-11-23 09:54:00.214825681 +0000 UTC m=+0.081192659 container create b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_bell, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.33.12)
Nov 23 04:54:00 localhost systemd[1]: Started libpod-conmon-b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599.scope.
Nov 23 04:54:00 localhost systemd[1]: Started libcrun container.
Nov 23 04:54:00 localhost podman[303754]: 2025-11-23 09:54:00.180459099 +0000 UTC m=+0.046826077 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:54:00 localhost podman[303754]: 2025-11-23 09:54:00.297089898 +0000 UTC m=+0.163456876 container init b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_bell, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-type=git)
Nov 23 04:54:00 localhost podman[303754]: 2025-11-23 09:54:00.308755691 +0000 UTC m=+0.175122669 container start b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_bell, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main)
Nov 23 04:54:00 localhost podman[303754]: 2025-11-23 09:54:00.30910453 +0000 UTC m=+0.175471518 container attach b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_bell, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=7, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:54:00 localhost relaxed_bell[303770]: 167 167
Nov 23 04:54:00 localhost systemd[1]: libpod-b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599.scope: Deactivated successfully.
Nov 23 04:54:00 localhost podman[303754]: 2025-11-23 09:54:00.313775245 +0000 UTC m=+0.180142243 container died b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_bell, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vendor=Red Hat, Inc., RELEASE=main, version=7, architecture=x86_64, distribution-scope=public, name=rhceph)
Nov 23 04:54:00 localhost podman[303775]: 2025-11-23 09:54:00.409788501 +0000 UTC m=+0.087574770 container remove b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_bell, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=553, com.redhat.component=rhceph-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git)
Nov 23 04:54:00 localhost systemd[1]: libpod-conmon-b7f4a9156b1e89b482bfa32e47273164a25d7b6002df93737a333f0c28010599.scope: Deactivated successfully.
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e85 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e85 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 e86: 6 total, 6 up, 6 in
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr handle_mgr_map Activating!
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr handle_mgr_map I am now activating
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532584"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mon metadata", "id": "np0005532584"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532585"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mon metadata", "id": "np0005532585"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005532586"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mon metadata", "id": "np0005532586"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005532585.jcltnl"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mds metadata", "who": "mds.np0005532585.jcltnl"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).mds e16 all = 0
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005532584.aoxjmw"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mds metadata", "who": "mds.np0005532584.aoxjmw"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).mds e16 all = 0
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005532586.mfohsb"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mds metadata", "who": "mds.np0005532586.mfohsb"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).mds e16 all = 0
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005532586.thmvqb", "id": "np0005532586.thmvqb"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mgr metadata", "who": "np0005532586.thmvqb", "id": "np0005532586.thmvqb"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005532583.orhywt", "id": "np0005532583.orhywt"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mgr metadata", "who": "np0005532583.orhywt", "id": "np0005532583.orhywt"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005532584.naxwxy", "id": "np0005532584.naxwxy"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mgr metadata", "who": "np0005532584.naxwxy", "id": "np0005532584.naxwxy"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mds metadata"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).mds e16 all = 1
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd metadata"} : dispatch
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mon metadata"} : dispatch
Nov 23 04:54:00 localhost ceph-mgr[287623]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: balancer
Nov 23 04:54:00 localhost ceph-mgr[287623]: [balancer INFO root] Starting
Nov 23 04:54:00 localhost ceph-mgr[287623]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: [balancer INFO root] Optimize plan auto_2025-11-23_09:54:00
Nov 23 04:54:00 localhost ceph-mgr[287623]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 04:54:00 localhost ceph-mgr[287623]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 23 04:54:00 localhost ceph-mgr[287623]: [cephadm WARNING root] removing stray HostCache host record np0005532583.localdomain.devices.0
Nov 23 04:54:00 localhost ceph-mgr[287623]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005532583.localdomain.devices.0
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch
Nov 23 04:54:00 localhost systemd[1]: session-67.scope: Deactivated successfully.
Nov 23 04:54:00 localhost systemd[1]: session-67.scope: Consumed 28.677s CPU time.
Nov 23 04:54:00 localhost systemd-logind[761]: Session 67 logged out. Waiting for processes to exit.
Nov 23 04:54:00 localhost systemd-logind[761]: Removed session 67.
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} v 0)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: cephadm
Nov 23 04:54:00 localhost ceph-mgr[287623]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: crash
Nov 23 04:54:00 localhost ceph-mgr[287623]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: devicehealth
Nov 23 04:54:00 localhost ceph-mgr[287623]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: iostat
Nov 23 04:54:00 localhost ceph-mgr[287623]: [devicehealth INFO root] Starting
Nov 23 04:54:00 localhost ceph-mgr[287623]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: nfs
Nov 23 04:54:00 localhost ceph-mgr[287623]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: orchestrator
Nov 23 04:54:00 localhost ceph-mgr[287623]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: pg_autoscaler
Nov 23 04:54:00 localhost ceph-mgr[287623]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: progress
Nov 23 04:54:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: [progress INFO root] Loading...
Nov 23 04:54:00 localhost ceph-mgr[287623]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f6ed40d4940>, <progress.module.GhostEvent object at 0x7f6ed40d4be0>, <progress.module.GhostEvent object at 0x7f6ed40d4c10>, <progress.module.GhostEvent object at 0x7f6ed40d4c40>, <progress.module.GhostEvent object at 0x7f6ed40d4c70>, <progress.module.GhostEvent object at 0x7f6ed40d4ca0>, <progress.module.GhostEvent object at 0x7f6ed40d4cd0>, <progress.module.GhostEvent object at 0x7f6ed40d4d00>, <progress.module.GhostEvent object at 0x7f6ed40d4d30>, <progress.module.GhostEvent object at 0x7f6ed40d4d60>, <progress.module.GhostEvent object at 0x7f6ed40d4d90>, <progress.module.GhostEvent object at 0x7f6ed40d4dc0>, <progress.module.GhostEvent object at 0x7f6ed40d4df0>, <progress.module.GhostEvent object at 0x7f6ed40d4e20>, <progress.module.GhostEvent object at 0x7f6ed40d4e50>, <progress.module.GhostEvent object at 0x7f6ed40d4e80>, <progress.module.GhostEvent object at 0x7f6ed40d4eb0>, <progress.module.GhostEvent object at 0x7f6ed40d4ee0>, <progress.module.GhostEvent object at 0x7f6ed40d4f10>, <progress.module.GhostEvent object at 0x7f6ed40d4f40>, <progress.module.GhostEvent object at 0x7f6ed40d4f70>, <progress.module.GhostEvent object at 0x7f6ed40d4fa0>, <progress.module.GhostEvent object at 0x7f6ed40d4fd0>, <progress.module.GhostEvent object at 0x7f6ed285f040>, <progress.module.GhostEvent object at 0x7f6ed285f070>, <progress.module.GhostEvent object at 0x7f6ed285f0a0>, <progress.module.GhostEvent object at 0x7f6ed285f0d0>, <progress.module.GhostEvent object at 0x7f6ed285f100>, <progress.module.GhostEvent object at 0x7f6ed285f130>, <progress.module.GhostEvent object at 0x7f6ed285f160>, <progress.module.GhostEvent object at 0x7f6ed285f190>, <progress.module.GhostEvent object at 0x7f6ed285f1c0>, <progress.module.GhostEvent object at 0x7f6ed285f1f0>, <progress.module.GhostEvent object at 0x7f6ed285f220>, <progress.module.GhostEvent object at 0x7f6ed285f250>, <progress.module.GhostEvent object at 0x7f6ed285f280>, <progress.module.GhostEvent object at 0x7f6ed285f2b0>, <progress.module.GhostEvent object at 0x7f6ed285f2e0>, <progress.module.GhostEvent object at 0x7f6ed285f310>, <progress.module.GhostEvent object at 0x7f6ed285f340>, <progress.module.GhostEvent object at 0x7f6ed285f370>, <progress.module.GhostEvent object at 0x7f6ed285f3a0>, <progress.module.GhostEvent object at 0x7f6ed285f3d0>, <progress.module.GhostEvent object at 0x7f6ed285f400>, <progress.module.GhostEvent object at 0x7f6ed285f430>, <progress.module.GhostEvent object at 0x7f6ed285f460>, <progress.module.GhostEvent object at 0x7f6ed285f490>, <progress.module.GhostEvent object at 0x7f6ed285f4c0>, <progress.module.GhostEvent object at 0x7f6ed285f4f0>, <progress.module.GhostEvent object at 0x7f6ed285f520>] historic events
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] recovery thread starting
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] starting setup
Nov 23 04:54:00 localhost ceph-mgr[287623]: [progress INFO root] Loaded OSDMap, ready.
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: rbd_support
Nov 23 04:54:00 localhost ceph-mgr[287623]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: restful
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/mirror_snapshot_schedule"} v 0)
Nov 23 04:54:00 localhost ceph-mgr[287623]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/mirror_snapshot_schedule"} : dispatch
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: status
Nov 23 04:54:00 localhost ceph-mgr[287623]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: telemetry
Nov 23 04:54:00 localhost ceph-mgr[287623]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [restful INFO root] server_addr: :: server_port: 8003
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] PerfHandler: starting
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [restful WARNING root] server not running: no certificate configured
Nov 23 04:54:00 localhost ceph-mgr[287623]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 23 04:54:00 localhost ceph-mgr[287623]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 23 04:54:00 localhost ceph-mgr[287623]: mgr load Constructed class from module: volumes
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.802+0000 7f6ec4ffa640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.802+0000 7f6ec4ffa640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.802+0000 7f6ec4ffa640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.802+0000 7f6ec4ffa640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.802+0000 7f6ec4ffa640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 23 04:54:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/trash_purge_schedule"} v 0)
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] TaskHandler: starting
Nov 23 04:54:00 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/trash_purge_schedule"} : dispatch
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.811+0000 7f6ebfff0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.811+0000 7f6ebfff0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.811+0000 7f6ebfff0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.811+0000 7f6ebfff0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:54:00.811+0000 7f6ebfff0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: client.0 error registering admin socket command: (17) File exists
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 23 04:54:00 localhost ceph-mgr[287623]: [rbd_support INFO root] setup complete
Nov 23 04:54:00 localhost sshd[303948]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:54:01 localhost systemd-logind[761]: New session 70 of user ceph-admin.
Nov 23 04:54:01 localhost systemd[1]: Started Session 70 of User ceph-admin.
Nov 23 04:54:01 localhost systemd[1]: var-lib-containers-storage-overlay-7ad4518ecc1c3dc4a5294302f9078df9d584d0e2dc3d6bc264d138278b988875-merged.mount: Deactivated successfully.
Nov 23 04:54:01 localhost ceph-mon[302802]: Reconfiguring crash.np0005532586 (monmap changed)...
Nov 23 04:54:01 localhost ceph-mon[302802]: Reconfiguring daemon crash.np0005532586 on np0005532586.localdomain
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.17319 ' entity='mgr.np0005532585.gzafiw' 
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.17319 172.18.0.107:0/1708578753' entity='mgr.np0005532585.gzafiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: Activating manager daemon np0005532586.thmvqb
Nov 23 04:54:01 localhost ceph-mon[302802]: from='client.? 172.18.0.200:0/3957521171' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 23 04:54:01 localhost ceph-mon[302802]: Manager daemon np0005532586.thmvqb is now available
Nov 23 04:54:01 localhost ceph-mon[302802]: removing stray HostCache host record np0005532583.localdomain.devices.0
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"}]': finished
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005532583.localdomain.devices.0"}]': finished
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/mirror_snapshot_schedule"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/mirror_snapshot_schedule"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/trash_purge_schedule"} : dispatch
Nov 23 04:54:01 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532586.thmvqb/trash_purge_schedule"} : dispatch
Nov 23 04:54:01 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:01 localhost ceph-mgr[287623]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:54:01] ENGINE Bus STARTING
Nov 23 04:54:01 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:54:01] ENGINE Bus STARTING
Nov 23 04:54:02 localhost ceph-mgr[287623]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:54:02] ENGINE Serving on http://172.18.0.108:8765
Nov 23 04:54:02 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:54:02] ENGINE Serving on http://172.18.0.108:8765
Nov 23 04:54:02 localhost systemd[1]: tmp-crun.rTR0bd.mount: Deactivated successfully.
Nov 23 04:54:02 localhost podman[304064]: 2025-11-23 09:54:02.074772039 +0000 UTC m=+0.107305019 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, ceph=True, GIT_BRANCH=main, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55)
Nov 23 04:54:02 localhost ceph-mgr[287623]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:54:02] ENGINE Serving on https://172.18.0.108:7150
Nov 23 04:54:02 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:54:02] ENGINE Serving on https://172.18.0.108:7150
Nov 23 04:54:02 localhost ceph-mgr[287623]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:54:02] ENGINE Bus STARTED
Nov 23 04:54:02 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:54:02] ENGINE Bus STARTED
Nov 23 04:54:02 localhost ceph-mgr[287623]: [cephadm INFO cherrypy.error] [23/Nov/2025:09:54:02] ENGINE Client ('172.18.0.108', 35224) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 04:54:02 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : [23/Nov/2025:09:54:02] ENGINE Client ('172.18.0.108', 35224) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 04:54:02 localhost podman[304064]: 2025-11-23 09:54:02.199056263 +0000 UTC m=+0.231589223 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=553, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 04:54:02 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:54:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:54:02 localhost ceph-mgr[287623]: [devicehealth INFO root] Check health
Nov 23 04:54:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:54:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:54:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:03 localhost ceph-mon[302802]: [23/Nov/2025:09:54:01] ENGINE Bus STARTING
Nov 23 04:54:03 localhost ceph-mon[302802]: [23/Nov/2025:09:54:02] ENGINE Serving on http://172.18.0.108:8765
Nov 23 04:54:03 localhost ceph-mon[302802]: [23/Nov/2025:09:54:02] ENGINE Serving on https://172.18.0.108:7150
Nov 23 04:54:03 localhost ceph-mon[302802]: [23/Nov/2025:09:54:02] ENGINE Bus STARTED
Nov 23 04:54:03 localhost ceph-mon[302802]: [23/Nov/2025:09:54:02] ENGINE Client ('172.18.0.108', 35224) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 04:54:03 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:03 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:03 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:03 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:03 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:03 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm INFO root] Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm INFO root] Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm INFO root] Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 04:54:04 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:04 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:54:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:54:04 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:04 localhost podman[304353]: 2025-11-23 09:54:04.663950786 +0000 UTC m=+0.098211785 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:54:04 localhost systemd[1]: tmp-crun.ZMT3bG.mount: Deactivated successfully.
Nov 23 04:54:04 localhost podman[304352]: 2025-11-23 09:54:04.719495436 +0000 UTC m=+0.155003468 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 04:54:04 localhost podman[304353]: 2025-11-23 09:54:04.727892152 +0000 UTC m=+0.162153151 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:54:04 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:54:04 localhost podman[304352]: 2025-11-23 09:54:04.753065337 +0000 UTC m=+0.188573339 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 04:54:04 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:54:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1019779150 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:04 localhost podman[304354]: 2025-11-23 09:54:04.821318438 +0000 UTC m=+0.252531475 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 04:54:04 localhost podman[304354]: 2025-11-23 09:54:04.890419561 +0000 UTC m=+0.321632568 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:54:04 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:54:04 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:54:05 localhost nova_compute[281613]: 2025-11-23 09:54:05.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:54:05 localhost nova_compute[281613]: 2025-11-23 09:54:05.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:54:05 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:05 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:05 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:05 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:05 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:05 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:05 localhost ceph-mgr[287623]: mgr.server handle_open ignoring open from mgr.np0005532585.gzafiw 172.18.0.107:0/2625181525; not ready for session (expect reconnect)
Nov 23 04:54:05 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:05 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:05 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:05 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:05 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 04:54:05 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 04:54:05 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:54:05 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 04:54:05 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:54:05 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 04:54:05 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:05 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:05 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:54:05 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:05 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:06 localhost nova_compute[281613]: 2025-11-23 09:54:06.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:54:06 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005532585.gzafiw", "id": "np0005532585.gzafiw"} v 0)
Nov 23 04:54:06 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "mgr metadata", "who": "np0005532585.gzafiw", "id": "np0005532585.gzafiw"} : dispatch
Nov 23 04:54:06 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:06 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:06 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:06 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:06 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:06 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:06 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:06 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:06 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:06 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:54:06 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:06 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.037 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.038 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.038 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.065 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.066 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.066 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.066 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.067 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:07 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 0 B/s wr, 20 op/s
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 04:54:07 localhost ceph-mgr[287623]: [progress INFO root] update: starting ev b6077fcf-9554-406d-bdd5-592b26768cf3 (Updating node-proxy deployment (+3 -> 3))
Nov 23 04:54:07 localhost ceph-mgr[287623]: [progress INFO root] complete: finished ev b6077fcf-9554-406d-bdd5-592b26768cf3 (Updating node-proxy deployment (+3 -> 3))
Nov 23 04:54:07 localhost ceph-mgr[287623]: [progress INFO root] Completed event b6077fcf-9554-406d-bdd5-592b26768cf3 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 04:54:07 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.622 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:54:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Nov 23 04:54:07 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:54:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:07 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:07 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:54:07 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:54:07 localhost podman[305079]: 2025-11-23 09:54:07.801871643 +0000 UTC m=+0.123040321 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.855 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.858 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11964MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.858 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.859 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:54:07 localhost podman[305079]: 2025-11-23 09:54:07.901474315 +0000 UTC m=+0.222642933 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 04:54:07 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.929 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.930 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:54:07 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:54:07 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:07 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:07 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:54:07 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:07 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:07 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:07 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:07 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:07 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:07 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:07 localhost ceph-mon[302802]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Nov 23 04:54:07 localhost ceph-mon[302802]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Nov 23 04:54:07 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 23 04:54:07 localhost nova_compute[281613]: 2025-11-23 09:54:07.950 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:54:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:54:08 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2776620448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:54:08 localhost nova_compute[281613]: 2025-11-23 09:54:08.395 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 04:54:08 localhost nova_compute[281613]: 2025-11-23 09:54:08.401 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 04:54:08 localhost nova_compute[281613]: 2025-11-23 09:54:08.426 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 04:54:08 localhost nova_compute[281613]: 2025-11-23 09:54:08.429 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 23 04:54:08 localhost nova_compute[281613]: 2025-11-23 09:54:08.429 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:54:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:54:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:54:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:54:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:54:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Nov 23 04:54:08 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:54:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:08 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:08 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:54:08 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:54:08 localhost ceph-mon[302802]: Reconfiguring daemon osd.2 on np0005532584.localdomain
Nov 23 04:54:08 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:08 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:08 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:08 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 23 04:54:08 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:54:09.258 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 04:54:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:54:09.259 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 04:54:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:54:09.259 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:54:09 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 0 B/s wr, 15 op/s
Nov 23 04:54:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:54:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:54:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:54:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:54:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020050365 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:09 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 23 04:54:09 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 23 04:54:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 23 04:54:09 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:54:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:09 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:09 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:54:09 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:54:09 localhost ceph-mon[302802]: Reconfiguring daemon osd.5 on np0005532584.localdomain
Nov 23 04:54:09 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:09 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:09 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:09 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:09 localhost ceph-mon[302802]: Reconfiguring osd.1 (monmap changed)...
Nov 23 04:54:09 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 23 04:54:09 localhost ceph-mon[302802]: Reconfiguring daemon osd.1 on np0005532586.localdomain
Nov 23 04:54:10 localhost podman[305173]: 
Nov 23 04:54:10 localhost podman[305173]: 2025-11-23 09:54:10.365043803 +0000 UTC m=+0.079730239 container create 698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_golick, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.33.12, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True)
Nov 23 04:54:10 localhost systemd[1]: Started libpod-conmon-698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510.scope.
Nov 23 04:54:10 localhost nova_compute[281613]: 2025-11-23 09:54:10.410 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 04:54:10 localhost nova_compute[281613]: 2025-11-23 09:54:10.411 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 04:54:10 localhost nova_compute[281613]: 2025-11-23 09:54:10.411 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 04:54:10 localhost nova_compute[281613]: 2025-11-23 09:54:10.411 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 04:54:10 localhost podman[305173]: 2025-11-23 09:54:10.334451084 +0000 UTC m=+0.049137560 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:54:10 localhost systemd[1]: Started libcrun container.
Nov 23 04:54:10 localhost podman[305173]: 2025-11-23 09:54:10.449468728 +0000 UTC m=+0.164155154 container init 698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_golick, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, release=553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, ceph=True, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12)
Nov 23 04:54:10 localhost podman[305173]: 2025-11-23 09:54:10.461774188 +0000 UTC m=+0.176460614 container start 698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_golick, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, ceph=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public)
Nov 23 04:54:10 localhost podman[305173]: 2025-11-23 09:54:10.462002894 +0000 UTC m=+0.176689330 container attach 698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_golick, ceph=True, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:54:10 localhost dazzling_golick[305188]: 167 167
Nov 23 04:54:10 localhost systemd[1]: libpod-698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510.scope: Deactivated successfully.
Nov 23 04:54:10 localhost podman[305173]: 2025-11-23 09:54:10.466087394 +0000 UTC m=+0.180773890 container died 698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_golick, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, architecture=x86_64, name=rhceph, version=7, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=)
Nov 23 04:54:10 localhost podman[305193]: 2025-11-23 09:54:10.5643496 +0000 UTC m=+0.087051626 container remove 698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_golick, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Nov 23 04:54:10 localhost systemd[1]: libpod-conmon-698ef3ae949774be192bd4c0052ca0c8e472c42606741840aba29da0113c8510.scope: Deactivated successfully.
Nov 23 04:54:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:10 localhost ceph-mgr[287623]: [progress INFO root] Writing back 50 completed events
Nov 23 04:54:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 04:54:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:10 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Nov 23 04:54:10 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Nov 23 04:54:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Nov 23 04:54:10 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:54:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:10 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:10 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:54:10 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:54:11 localhost podman[240144]: time="2025-11-23T09:54:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:54:11 localhost podman[240144]: @ - - [23/Nov/2025:09:54:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:54:11 localhost podman[240144]: @ - - [23/Nov/2025:09:54:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19182 "" "Go-http-client/1.1"
Nov 23 04:54:11 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s
Nov 23 04:54:11 localhost systemd[1]: var-lib-containers-storage-overlay-43c77187c22a74c08392666dfd6bb153d4d777724653f45a69ed41b07f436713-merged.mount: Deactivated successfully.
Nov 23 04:54:11 localhost podman[305271]: 
Nov 23 04:54:11 localhost podman[305271]: 2025-11-23 09:54:11.485861446 +0000 UTC m=+0.074030027 container create f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_mayer, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., ceph=True)
Nov 23 04:54:11 localhost systemd[1]: Started libpod-conmon-f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8.scope.
Nov 23 04:54:11 localhost systemd[1]: Started libcrun container.
Nov 23 04:54:11 localhost podman[305271]: 2025-11-23 09:54:11.455263996 +0000 UTC m=+0.043432607 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:54:11 localhost podman[305271]: 2025-11-23 09:54:11.556254674 +0000 UTC m=+0.144423275 container init f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_mayer, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:54:11 localhost podman[305271]: 2025-11-23 09:54:11.570847906 +0000 UTC m=+0.159016497 container start f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_mayer, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:54:11 localhost podman[305271]: 2025-11-23 09:54:11.571160244 +0000 UTC m=+0.159328825 container attach f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_mayer, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, RELEASE=main, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True)
Nov 23 04:54:11 localhost musing_mayer[305286]: 167 167
Nov 23 04:54:11 localhost systemd[1]: libpod-f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8.scope: Deactivated successfully.
Nov 23 04:54:11 localhost podman[305271]: 2025-11-23 09:54:11.575731057 +0000 UTC m=+0.163899668 container died f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_mayer, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, name=rhceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553)
Nov 23 04:54:11 localhost podman[305291]: 2025-11-23 09:54:11.669366068 +0000 UTC m=+0.078945638 container remove f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_mayer, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 23 04:54:11 localhost systemd[1]: libpod-conmon-f407142ecee7f207f7e8e5dd0de405fd1e6e0181f5886cd5d588da6b27913ca8.scope: Deactivated successfully.
Nov 23 04:54:11 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:11 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:11 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:11 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:11 localhost ceph-mon[302802]: Reconfiguring osd.4 (monmap changed)...
Nov 23 04:54:11 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 23 04:54:11 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:11 localhost ceph-mon[302802]: Reconfiguring daemon osd.4 on np0005532586.localdomain
Nov 23 04:54:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:11 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:54:11 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:54:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 23 04:54:11 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:54:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:11 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:11 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:54:11 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:54:12 localhost systemd[1]: tmp-crun.QZ2A6G.mount: Deactivated successfully.
Nov 23 04:54:12 localhost systemd[1]: var-lib-containers-storage-overlay-01e892d234419f584c7a75c035cc0e6cd396c491ca4d810c8e63fe88d703820e-merged.mount: Deactivated successfully.
Nov 23 04:54:12 localhost ceph-mgr[287623]: log_channel(audit) log [DBG] : from='client.64151 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 04:54:12 localhost podman[305367]: 
Nov 23 04:54:12 localhost podman[305367]: 2025-11-23 09:54:12.535578492 +0000 UTC m=+0.084251231 container create a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_jang, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container)
Nov 23 04:54:12 localhost systemd[1]: Started libpod-conmon-a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f.scope.
Nov 23 04:54:12 localhost systemd[1]: Started libcrun container.
Nov 23 04:54:12 localhost podman[305367]: 2025-11-23 09:54:12.502375311 +0000 UTC m=+0.051048090 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:54:12 localhost podman[305367]: 2025-11-23 09:54:12.606838363 +0000 UTC m=+0.155511112 container init a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_jang, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=553, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-09-24T08:57:55, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:54:12 localhost podman[305367]: 2025-11-23 09:54:12.617632613 +0000 UTC m=+0.166305352 container start a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_jang, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, GIT_CLEAN=True, distribution-scope=public, release=553, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 23 04:54:12 localhost podman[305367]: 2025-11-23 09:54:12.617871999 +0000 UTC m=+0.166544808 container attach a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_jang, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 23 04:54:12 localhost eloquent_jang[305382]: 167 167
Nov 23 04:54:12 localhost systemd[1]: libpod-a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f.scope: Deactivated successfully.
Nov 23 04:54:12 localhost podman[305367]: 2025-11-23 09:54:12.622953065 +0000 UTC m=+0.171625814 container died a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_jang, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, release=553, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., RELEASE=main)
Nov 23 04:54:12 localhost podman[305387]: 2025-11-23 09:54:12.714414868 +0000 UTC m=+0.078563668 container remove a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_jang, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_CLEAN=True, release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:54:12 localhost systemd[1]: libpod-conmon-a99f4f015c50a00fcc772997691c00bc2ece083ea6da1e424191c98f5b4c719f.scope: Deactivated successfully.
Nov 23 04:54:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:12 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 04:54:12 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:54:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 04:54:12 localhost ceph-mgr[287623]: [progress INFO root] update: starting ev c6324e71-616d-4af5-be82-0be35d72c933 (Updating node-proxy deployment (+3 -> 3))
Nov 23 04:54:12 localhost ceph-mgr[287623]: [progress INFO root] complete: finished ev c6324e71-616d-4af5-be82-0be35d72c933 (Updating node-proxy deployment (+3 -> 3))
Nov 23 04:54:12 localhost ceph-mgr[287623]: [progress INFO root] Completed event c6324e71-616d-4af5-be82-0be35d72c933 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Nov 23 04:54:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 04:54:12 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:12 localhost ceph-mon[302802]: Reconfiguring mds.mds.np0005532586.mfohsb (monmap changed)...
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:12 localhost ceph-mon[302802]: Reconfiguring daemon mds.mds.np0005532586.mfohsb on np0005532586.localdomain
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005532586.mfohsb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:54:12 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:13 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Nov 23 04:54:13 localhost systemd[1]: var-lib-containers-storage-overlay-82bc7adff505b4bf41e66126cb7dc0ad2631b54d58cd16ded35bece63d3c2a8f-merged.mount: Deactivated successfully.
Nov 23 04:54:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054660 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:15 localhost ceph-mgr[287623]: log_channel(audit) log [DBG] : from='client.54220 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 23 04:54:15 localhost ceph-mgr[287623]: [cephadm INFO root] Saving service mon spec with placement label:mon
Nov 23 04:54:15 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 04:54:15 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 23 04:54:15 localhost ceph-mgr[287623]: [progress INFO root] update: starting ev 5c8f0902-7f26-48d5-a8ff-1f0a75255872 (Updating node-proxy deployment (+3 -> 3))
Nov 23 04:54:15 localhost ceph-mgr[287623]: [progress INFO root] complete: finished ev 5c8f0902-7f26-48d5-a8ff-1f0a75255872 (Updating node-proxy deployment (+3 -> 3))
Nov 23 04:54:15 localhost ceph-mgr[287623]: [progress INFO root] Completed event 5c8f0902-7f26-48d5-a8ff-1f0a75255872 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 04:54:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 23 04:54:15 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Nov 23 04:54:15 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:54:15 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 04:54:15 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 04:54:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:15 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:54:15 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:54:15 localhost ceph-mgr[287623]: [progress INFO root] Writing back 50 completed events
Nov 23 04:54:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 04:54:16 localhost ceph-mon[302802]: Saving service mon spec with placement label:mon
Nov 23 04:54:16 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:16 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:54:16 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:16 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:16 localhost ceph-mon[302802]: Reconfiguring mon.np0005532584 (monmap changed)...
Nov 23 04:54:16 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:54:16 localhost ceph-mon[302802]: Reconfiguring daemon mon.np0005532584 on np0005532584.localdomain
Nov 23 04:54:16 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:16 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain.devices.0}] v 0)
Nov 23 04:54:16 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532584.localdomain}] v 0)
Nov 23 04:54:16 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532585 (monmap changed)...
Nov 23 04:54:16 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532585 (monmap changed)...
Nov 23 04:54:16 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 04:54:16 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:54:16 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 04:54:16 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Nov 23 04:54:16 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:16 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:16 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:54:16 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:54:16 localhost ceph-mgr[287623]: log_channel(audit) log [DBG] : from='client.54223 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005532586", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 04:54:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain.devices.0}] v 0)
Nov 23 04:54:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532585.localdomain}] v 0)
Nov 23 04:54:17 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005532586 (monmap changed)...
Nov 23 04:54:17 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005532586 (monmap changed)...
Nov 23 04:54:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 23 04:54:17 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:54:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 23 04:54:17 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Nov 23 04:54:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:54:17 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:54:17 localhost ceph-mgr[287623]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:54:17 localhost ceph-mgr[287623]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:54:17 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Nov 23 04:54:17 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:17 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:17 localhost ceph-mon[302802]: Reconfiguring mon.np0005532585 (monmap changed)...
Nov 23 04:54:17 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:54:17 localhost ceph-mon[302802]: Reconfiguring daemon mon.np0005532585 on np0005532585.localdomain
Nov 23 04:54:17 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:17 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:17 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 23 04:54:17 localhost podman[305492]: 
Nov 23 04:54:17 localhost podman[305492]: 2025-11-23 09:54:17.887990876 +0000 UTC m=+0.076358829 container create de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_poincare, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.33.12, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 23 04:54:17 localhost systemd[1]: Started libpod-conmon-de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643.scope.
Nov 23 04:54:17 localhost systemd[1]: Started libcrun container.
Nov 23 04:54:17 localhost podman[305492]: 2025-11-23 09:54:17.855928995 +0000 UTC m=+0.044296988 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 04:54:17 localhost podman[305492]: 2025-11-23 09:54:17.963213504 +0000 UTC m=+0.151581457 container init de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_poincare, ceph=True, GIT_BRANCH=main, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, version=7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True)
Nov 23 04:54:17 localhost podman[305492]: 2025-11-23 09:54:17.973461488 +0000 UTC m=+0.161829441 container start de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_poincare, io.buildah.version=1.33.12, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, vcs-type=git, GIT_CLEAN=True, RELEASE=main, release=553)
Nov 23 04:54:17 localhost thirsty_poincare[305507]: 167 167
Nov 23 04:54:17 localhost podman[305492]: 2025-11-23 09:54:17.975618556 +0000 UTC m=+0.163986509 container attach de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_poincare, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., release=553, com.redhat.component=rhceph-container, name=rhceph, version=7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 23 04:54:17 localhost systemd[1]: libpod-de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643.scope: Deactivated successfully.
Nov 23 04:54:17 localhost podman[305492]: 2025-11-23 09:54:17.978619156 +0000 UTC m=+0.166987139 container died de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_poincare, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 23 04:54:18 localhost podman[305512]: 2025-11-23 09:54:18.07383604 +0000 UTC m=+0.083522761 container remove de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_poincare, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12)
Nov 23 04:54:18 localhost systemd[1]: libpod-conmon-de4be867fc78d860dde40169a065809ae81b81749a9d1aa1549c48834fa1f643.scope: Deactivated successfully.
Nov 23 04:54:18 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain.devices.0}] v 0)
Nov 23 04:54:18 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005532586.localdomain}] v 0)
Nov 23 04:54:18 localhost ceph-mon[302802]: Reconfiguring mon.np0005532586 (monmap changed)...
Nov 23 04:54:18 localhost ceph-mon[302802]: Reconfiguring daemon mon.np0005532586 on np0005532586.localdomain
Nov 23 04:54:18 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:18 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:54:18 localhost systemd[1]: var-lib-containers-storage-overlay-28e6b343f2d84b31fc668ce2ce186fc6c82597e1f51e37aa85f236c9bc28d54d-merged.mount: Deactivated successfully.
Nov 23 04:54:19 localhost systemd[1]: session-68.scope: Deactivated successfully.
Nov 23 04:54:19 localhost systemd[1]: session-68.scope: Consumed 1.717s CPU time.
Nov 23 04:54:19 localhost systemd-logind[761]: Session 68 logged out. Waiting for processes to exit.
Nov 23 04:54:19 localhost systemd-logind[761]: Removed session 68.
Nov 23 04:54:19 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:21 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:22 localhost openstack_network_exporter[242118]: ERROR   09:54:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:54:22 localhost openstack_network_exporter[242118]: ERROR   09:54:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:54:22 localhost openstack_network_exporter[242118]: ERROR   09:54:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:54:22 localhost openstack_network_exporter[242118]: ERROR   09:54:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:54:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:54:22 localhost openstack_network_exporter[242118]: ERROR   09:54:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:54:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:54:23 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:25 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:54:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:54:26 localhost podman[305529]: 2025-11-23 09:54:26.175959077 +0000 UTC m=+0.081226529 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6)
Nov 23 04:54:26 localhost podman[305529]: 2025-11-23 09:54:26.189020008 +0000 UTC m=+0.094287480 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible)
Nov 23 04:54:26 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:54:26 localhost podman[305530]: 2025-11-23 09:54:26.232124625 +0000 UTC m=+0.135299701 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 23 04:54:26 localhost podman[305530]: 2025-11-23 09:54:26.241460945 +0000 UTC m=+0.144636051 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:54:26 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:54:26 localhost podman[305531]: 2025-11-23 09:54:26.282717232 +0000 UTC m=+0.183964696 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:54:26 localhost podman[305531]: 2025-11-23 09:54:26.318829959 +0000 UTC m=+0.220077413 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:54:26 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.201072) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667201118, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1666, "num_deletes": 255, "total_data_size": 5704233, "memory_usage": 5979144, "flush_reason": "Manual Compaction"}
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667219158, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3362442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12642, "largest_seqno": 14303, "table_properties": {"data_size": 3355375, "index_size": 3892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 18742, "raw_average_key_size": 22, "raw_value_size": 3339894, "raw_average_value_size": 4004, "num_data_blocks": 170, "num_entries": 834, "num_filter_entries": 834, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891633, "oldest_key_time": 1763891633, "file_creation_time": 1763891667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 18145 microseconds, and 7785 cpu microseconds.
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.219216) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3362442 bytes OK
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.219247) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.221370) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.221394) EVENT_LOG_v1 {"time_micros": 1763891667221388, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.221410) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5695784, prev total WAL file size 5696533, number of live WAL files 2.
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.222842) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373536' seq:72057594037927935, type:22 .. '6D6772737461740034303038' seq:0, type:0; will stop at (end)
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3283KB)], [15(16MB)]
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667222935, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 20587258, "oldest_snapshot_seqno": -1}
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 12210 keys, 18324041 bytes, temperature: kUnknown
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667314968, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 18324041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18254889, "index_size": 37563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30533, "raw_key_size": 325817, "raw_average_key_size": 26, "raw_value_size": 18047435, "raw_average_value_size": 1478, "num_data_blocks": 1437, "num_entries": 12210, "num_filter_entries": 12210, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763891667, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.315370) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 18324041 bytes
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.317238) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.3 rd, 198.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 16.4 +0.0 blob) out(17.5 +0.0 blob), read-write-amplify(11.6) write-amplify(5.4) OK, records in: 12738, records dropped: 528 output_compression: NoCompression
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.317268) EVENT_LOG_v1 {"time_micros": 1763891667317255, "job": 6, "event": "compaction_finished", "compaction_time_micros": 92186, "compaction_time_cpu_micros": 50428, "output_level": 6, "num_output_files": 1, "total_output_size": 18324041, "num_input_records": 12738, "num_output_records": 12210, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667318039, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891667320791, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.222723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.320957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.320964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.320967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.320970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:54:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:54:27.320973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:54:27 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:29 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:29 localhost systemd[1]: Stopping User Manager for UID 1003...
Nov 23 04:54:29 localhost systemd[300573]: Activating special unit Exit the Session...
Nov 23 04:54:29 localhost systemd[300573]: Stopped target Main User Target.
Nov 23 04:54:29 localhost systemd[300573]: Stopped target Basic System.
Nov 23 04:54:29 localhost systemd[300573]: Stopped target Paths.
Nov 23 04:54:29 localhost systemd[300573]: Stopped target Sockets.
Nov 23 04:54:29 localhost systemd[300573]: Stopped target Timers.
Nov 23 04:54:29 localhost systemd[300573]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 23 04:54:29 localhost systemd[300573]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 23 04:54:29 localhost systemd[300573]: Closed D-Bus User Message Bus Socket.
Nov 23 04:54:29 localhost systemd[300573]: Stopped Create User's Volatile Files and Directories.
Nov 23 04:54:29 localhost systemd[300573]: Removed slice User Application Slice.
Nov 23 04:54:29 localhost systemd[300573]: Reached target Shutdown.
Nov 23 04:54:29 localhost systemd[300573]: Finished Exit the Session.
Nov 23 04:54:29 localhost systemd[300573]: Reached target Exit the Session.
Nov 23 04:54:29 localhost systemd[1]: user@1003.service: Deactivated successfully.
Nov 23 04:54:29 localhost systemd[1]: Stopped User Manager for UID 1003.
Nov 23 04:54:29 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Nov 23 04:54:29 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Nov 23 04:54:29 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Nov 23 04:54:29 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Nov 23 04:54:29 localhost systemd[1]: Removed slice User Slice of UID 1003.
Nov 23 04:54:29 localhost systemd[1]: user-1003.slice: Consumed 2.363s CPU time.
Nov 23 04:54:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:30 localhost ceph-mgr[287623]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 04:54:30 localhost ceph-mgr[287623]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 04:54:30 localhost ceph-mgr[287623]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 04:54:30 localhost ceph-mgr[287623]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 04:54:30 localhost ceph-mgr[287623]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 04:54:30 localhost ceph-mgr[287623]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 04:54:31 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:33 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:54:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:54:35 localhost podman[305587]: 2025-11-23 09:54:35.178913677 +0000 UTC m=+0.084553608 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 04:54:35 localhost systemd[1]: tmp-crun.kemWer.mount: Deactivated successfully.
Nov 23 04:54:35 localhost podman[305589]: 2025-11-23 09:54:35.269649152 +0000 UTC m=+0.169945160 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 23 04:54:35 localhost podman[305589]: 2025-11-23 09:54:35.306375087 +0000 UTC m=+0.206671105 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 04:54:35 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:54:35 localhost podman[305587]: 2025-11-23 09:54:35.361985129 +0000 UTC m=+0.267625040 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 04:54:35 localhost podman[305588]: 2025-11-23 09:54:35.361096614 +0000 UTC m=+0.263020126 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:54:35 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:35 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:54:35 localhost podman[305588]: 2025-11-23 09:54:35.442058176 +0000 UTC m=+0.343981718 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:54:35 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:54:37 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:54:38 localhost podman[305651]: 2025-11-23 09:54:38.178786231 +0000 UTC m=+0.087305502 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 23 04:54:38 localhost podman[305651]: 2025-11-23 09:54:38.214986962 +0000 UTC m=+0.123506213 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 23 04:54:38 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:54:39 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:41 localhost podman[240144]: time="2025-11-23T09:54:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:54:41 localhost podman[240144]: @ - - [23/Nov/2025:09:54:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:54:41 localhost podman[240144]: @ - - [23/Nov/2025:09:54:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19181 "" "Go-http-client/1.1"
Nov 23 04:54:41 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:43 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:45 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:47 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:49 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:51 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:52 localhost openstack_network_exporter[242118]: ERROR   09:54:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:54:52 localhost openstack_network_exporter[242118]: ERROR   09:54:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:54:52 localhost openstack_network_exporter[242118]: ERROR   09:54:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:54:52 localhost openstack_network_exporter[242118]: ERROR   09:54:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:54:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:54:52 localhost openstack_network_exporter[242118]: ERROR   09:54:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:54:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:54:53 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:54:55 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:54:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:54:57 localhost systemd[1]: tmp-crun.94GMhh.mount: Deactivated successfully.
Nov 23 04:54:57 localhost podman[305671]: 2025-11-23 09:54:57.179345034 +0000 UTC m=+0.084163147 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:54:57 localhost podman[305671]: 2025-11-23 09:54:57.189191302 +0000 UTC m=+0.094009405 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 04:54:57 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:54:57 localhost podman[305670]: 2025-11-23 09:54:57.238854717 +0000 UTC m=+0.146199009 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Nov 23 04:54:57 localhost podman[305670]: 2025-11-23 09:54:57.250682929 +0000 UTC m=+0.158027251 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, config_id=edpm, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Nov 23 04:54:57 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:54:57 localhost podman[305672]: 2025-11-23 09:54:57.300992522 +0000 UTC m=+0.199784140 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:54:57 localhost podman[305672]: 2025-11-23 09:54:57.309088113 +0000 UTC m=+0.207879751 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:54:57 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:54:57 localhost sshd[305731]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:54:57 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:59 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:54:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 04:55:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1481934905' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 04:55:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 04:55:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1481934905' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 04:55:00 localhost ceph-mgr[287623]: [balancer INFO root] Optimize plan auto_2025-11-23_09:55:00
Nov 23 04:55:00 localhost ceph-mgr[287623]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 23 04:55:00 localhost ceph-mgr[287623]: [balancer INFO root] do_upmap
Nov 23 04:55:00 localhost ceph-mgr[287623]: [balancer INFO root] pools ['.mgr', 'manila_data', 'vms', 'backups', 'manila_metadata', 'images', 'volumes']
Nov 23 04:55:00 localhost ceph-mgr[287623]: [balancer INFO root] prepared 0/10 changes
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] _maybe_adjust
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 23 04:55:00 localhost ceph-mgr[287623]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16)
Nov 23 04:55:00 localhost ceph-mgr[287623]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 04:55:00 localhost ceph-mgr[287623]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 04:55:00 localhost ceph-mgr[287623]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 04:55:00 localhost ceph-mgr[287623]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 04:55:00 localhost ceph-mgr[287623]: [volumes INFO mgr_util] scanning for idle connections..
Nov 23 04:55:00 localhost ceph-mgr[287623]: [volumes INFO mgr_util] cleaning up connections: []
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 04:55:00 localhost ceph-mgr[287623]: log_channel(audit) log [DBG] : from='client.54247 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 23 04:55:00 localhost ceph-mgr[287623]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 23 04:55:01 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.238104) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702238437, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 637, "num_deletes": 256, "total_data_size": 552338, "memory_usage": 564408, "flush_reason": "Manual Compaction"}
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702243902, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 353856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14308, "largest_seqno": 14940, "table_properties": {"data_size": 350917, "index_size": 922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6845, "raw_average_key_size": 18, "raw_value_size": 344937, "raw_average_value_size": 927, "num_data_blocks": 41, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891667, "oldest_key_time": 1763891667, "file_creation_time": 1763891702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5833 microseconds, and 2152 cpu microseconds.
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.243947) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 353856 bytes OK
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.243969) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.247141) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.247163) EVENT_LOG_v1 {"time_micros": 1763891702247156, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.247184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 548787, prev total WAL file size 549111, number of live WAL files 2.
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.248144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373733' seq:72057594037927935, type:22 .. '6C6F676D0034303235' seq:0, type:0; will stop at (end)
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(345KB)], [18(17MB)]
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702248420, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18677897, "oldest_snapshot_seqno": -1}
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 12057 keys, 18579235 bytes, temperature: kUnknown
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702335351, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 18579235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18509983, "index_size": 38047, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 323579, "raw_average_key_size": 26, "raw_value_size": 18304022, "raw_average_value_size": 1518, "num_data_blocks": 1455, "num_entries": 12057, "num_filter_entries": 12057, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763891702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.335826) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 18579235 bytes
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.337398) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.4 rd, 213.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 17.5 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(105.3) write-amplify(52.5) OK, records in: 12582, records dropped: 525 output_compression: NoCompression
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.337438) EVENT_LOG_v1 {"time_micros": 1763891702337420, "job": 8, "event": "compaction_finished", "compaction_time_micros": 87126, "compaction_time_cpu_micros": 52991, "output_level": 6, "num_output_files": 1, "total_output_size": 18579235, "num_input_records": 12582, "num_output_records": 12057, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702337704, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891702341109, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.247759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.341149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.341155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.341158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.341161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:02.341164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:03 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:05 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:06 localhost nova_compute[281613]: 2025-11-23 09:55:06.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:55:06 localhost systemd[1]: tmp-crun.PDwqZu.mount: Deactivated successfully.
Nov 23 04:55:06 localhost podman[305733]: 2025-11-23 09:55:06.184844599 +0000 UTC m=+0.090716935 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 23 04:55:06 localhost podman[305734]: 2025-11-23 09:55:06.242025309 +0000 UTC m=+0.142209650 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:55:06 localhost podman[305734]: 2025-11-23 09:55:06.250595623 +0000 UTC m=+0.150779924 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:55:06 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:55:06 localhost podman[305735]: 2025-11-23 09:55:06.215719921 +0000 UTC m=+0.111324757 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 04:55:06 localhost podman[305735]: 2025-11-23 09:55:06.300043621 +0000 UTC m=+0.195648467 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Nov 23 04:55:06 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:55:06 localhost podman[305733]: 2025-11-23 09:55:06.319764989 +0000 UTC m=+0.225637335 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 23 04:55:06 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.043 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.044 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.044 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.044 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.045 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:55:07 localhost systemd[1]: tmp-crun.RxPS6b.mount: Deactivated successfully.
Nov 23 04:55:07 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:55:07 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1696104249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.497 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.699 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.702 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11965MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.702 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.703 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.772 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.773 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:55:07 localhost nova_compute[281613]: 2025-11-23 09:55:07.790 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:55:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:55:08 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1663917042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:55:08 localhost nova_compute[281613]: 2025-11-23 09:55:08.346 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:55:08 localhost nova_compute[281613]: 2025-11-23 09:55:08.354 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:55:08 localhost nova_compute[281613]: 2025-11-23 09:55:08.373 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:55:08 localhost nova_compute[281613]: 2025-11-23 09:55:08.375 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:55:08 localhost nova_compute[281613]: 2025-11-23 09:55:08.376 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:55:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:55:09 localhost podman[305839]: 2025-11-23 09:55:09.177001836 +0000 UTC m=+0.082509961 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:55:09 localhost podman[305839]: 2025-11-23 09:55:09.216038811 +0000 UTC m=+0.121546986 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 04:55:09 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:55:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:55:09.261 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:55:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:55:09.261 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:55:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:55:09.261 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:55:09 localhost nova_compute[281613]: 2025-11-23 09:55:09.376 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:09 localhost nova_compute[281613]: 2025-11-23 09:55:09.377 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:55:09 localhost nova_compute[281613]: 2025-11-23 09:55:09.377 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:55:09 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:09 localhost nova_compute[281613]: 2025-11-23 09:55:09.393 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:55:09 localhost nova_compute[281613]: 2025-11-23 09:55:09.394 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:09 localhost nova_compute[281613]: 2025-11-23 09:55:09.394 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:10 localhost nova_compute[281613]: 2025-11-23 09:55:10.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:10 localhost nova_compute[281613]: 2025-11-23 09:55:10.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:55:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:55:11 localhost podman[240144]: time="2025-11-23T09:55:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:55:11 localhost podman[240144]: @ - - [23/Nov/2025:09:55:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:55:11 localhost podman[240144]: @ - - [23/Nov/2025:09:55:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19187 "" "Go-http-client/1.1"
Nov 23 04:55:11 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:13 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:13 localhost ceph-mgr[287623]: log_channel(audit) log [DBG] : from='client.64199 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 23 04:55:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:15 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:17 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 04:55:19 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 04:55:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 23 04:55:19 localhost ceph-mon[302802]: log_channel(audit) log [INF] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:55:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 23 04:55:19 localhost ceph-mgr[287623]: [progress INFO root] update: starting ev 4047ee2e-ccd1-4bea-b827-1190827bbe29 (Updating node-proxy deployment (+3 -> 3))
Nov 23 04:55:19 localhost ceph-mgr[287623]: [progress INFO root] complete: finished ev 4047ee2e-ccd1-4bea-b827-1190827bbe29 (Updating node-proxy deployment (+3 -> 3))
Nov 23 04:55:19 localhost ceph-mgr[287623]: [progress INFO root] Completed event 4047ee2e-ccd1-4bea-b827-1190827bbe29 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Nov 23 04:55:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 23 04:55:19 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 23 04:55:19 localhost ceph-mon[302802]: from='mgr.26494 172.18.0.108:0/345283227' entity='mgr.np0005532586.thmvqb' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:55:19 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:55:19 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:20 localhost ceph-mgr[287623]: [progress INFO root] Writing back 50 completed events
Nov 23 04:55:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 23 04:55:21 localhost ceph-mgr[287623]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Nov 23 04:55:21 localhost ceph-mon[302802]: from='mgr.26494 ' entity='mgr.np0005532586.thmvqb' 
Nov 23 04:55:22 localhost openstack_network_exporter[242118]: ERROR   09:55:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:55:22 localhost openstack_network_exporter[242118]: ERROR   09:55:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:55:22 localhost openstack_network_exporter[242118]: ERROR   09:55:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:55:22 localhost openstack_network_exporter[242118]: ERROR   09:55:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:55:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:55:22 localhost openstack_network_exporter[242118]: ERROR   09:55:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:55:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:55:23 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 e87: 6 total, 6 up, 6 in
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr handle_mgr_map I was active but no longer am
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:23.225+0000 7f6f4ca20640 -1 mgr handle_mgr_map I was active but no longer am
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  1: '-n'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  2: 'mgr.np0005532586.thmvqb'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  3: '-f'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  4: '--setuser'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  5: 'ceph'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  6: '--setgroup'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  7: 'ceph'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  8: '--default-log-to-file=false'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  9: '--default-log-to-journald=true'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr respawn  exe_path /proc/self/exe
Nov 23 04:55:23 localhost systemd[1]: session-70.scope: Deactivated successfully.
Nov 23 04:55:23 localhost systemd[1]: session-70.scope: Consumed 10.363s CPU time.
Nov 23 04:55:23 localhost systemd-logind[761]: Session 70 logged out. Waiting for processes to exit.
Nov 23 04:55:23 localhost systemd-logind[761]: Removed session 70.
Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: ignoring --setuser ceph since I am not root
Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: ignoring --setgroup ceph since I am not root
Nov 23 04:55:23 localhost ceph-mgr[287623]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Nov 23 04:55:23 localhost ceph-mgr[287623]: pidfile_write: ignore empty --pid-file
Nov 23 04:55:23 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr[py] Loading python module 'alerts'
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr[py] Loading python module 'balancer'
Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:23.434+0000 7fe4fe40c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 23 04:55:23 localhost sshd[305965]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 04:55:23 localhost ceph-mgr[287623]: mgr[py] Loading python module 'cephadm'
Nov 23 04:55:23 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:23.499+0000 7fe4fe40c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 23 04:55:23 localhost systemd-logind[761]: New session 71 of user ceph-admin.
Nov 23 04:55:23 localhost systemd[1]: Started Session 71 of User ceph-admin.
Nov 23 04:55:23 localhost ceph-mon[302802]: from='client.? 172.18.0.200:0/1136241170' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:55:23 localhost ceph-mon[302802]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 23 04:55:23 localhost ceph-mon[302802]: Activating manager daemon np0005532584.naxwxy
Nov 23 04:55:23 localhost ceph-mon[302802]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 23 04:55:23 localhost ceph-mon[302802]: Manager daemon np0005532584.naxwxy is now available
Nov 23 04:55:23 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/mirror_snapshot_schedule"} : dispatch
Nov 23 04:55:23 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005532584.naxwxy/trash_purge_schedule"} : dispatch
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Loading python module 'crash'
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Loading python module 'dashboard'
Nov 23 04:55:24 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:24.125+0000 7fe4fe40c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 23 04:55:24 localhost podman[306079]: 2025-11-23 09:55:24.616507755 +0000 UTC m=+0.088997889 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Loading python module 'devicehealth'
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Loading python module 'diskprediction_local'
Nov 23 04:55:24 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:24.681+0000 7fe4fe40c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 23 04:55:24 localhost podman[306079]: 2025-11-23 09:55:24.747978671 +0000 UTC m=+0.220468805 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph)
Nov 23 04:55:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:24 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 23 04:55:24 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 23 04:55:24 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]:  from numpy import show_config as show_numpy_config
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Loading python module 'influx'
Nov 23 04:55:24 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:24.817+0000 7fe4fe40c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Loading python module 'insights'
Nov 23 04:55:24 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:24.878+0000 7fe4fe40c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 23 04:55:24 localhost ceph-mgr[287623]: mgr[py] Loading python module 'iostat'
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Loading python module 'k8sevents'
Nov 23 04:55:25 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:25.006+0000 7fe4fe40c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Loading python module 'localpool'
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Loading python module 'mds_autoscaler'
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Loading python module 'mirroring'
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Loading python module 'nfs'
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Loading python module 'orchestrator'
Nov 23 04:55:25 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:25.744+0000 7fe4fe40c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Loading python module 'osd_perf_query'
Nov 23 04:55:25 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:25.887+0000 7fe4fe40c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 23 04:55:25 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:25.954+0000 7fe4fe40c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 23 04:55:25 localhost ceph-mgr[287623]: mgr[py] Loading python module 'osd_support'
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Loading python module 'pg_autoscaler'
Nov 23 04:55:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:26.014+0000 7fe4fe40c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Loading python module 'progress'
Nov 23 04:55:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:26.080+0000 7fe4fe40c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Loading python module 'prometheus'
Nov 23 04:55:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:26.141+0000 7fe4fe40c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mon[302802]: [23/Nov/2025:09:55:25] ENGINE Bus STARTING
Nov 23 04:55:26 localhost ceph-mon[302802]: [23/Nov/2025:09:55:25] ENGINE Serving on https://172.18.0.106:7150
Nov 23 04:55:26 localhost ceph-mon[302802]: [23/Nov/2025:09:55:25] ENGINE Client ('172.18.0.106', 60482) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 23 04:55:26 localhost ceph-mon[302802]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Nov 23 04:55:26 localhost ceph-mon[302802]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Nov 23 04:55:26 localhost ceph-mon[302802]: Cluster is now healthy
Nov 23 04:55:26 localhost ceph-mon[302802]: [23/Nov/2025:09:55:25] ENGINE Serving on http://172.18.0.106:8765
Nov 23 04:55:26 localhost ceph-mon[302802]: [23/Nov/2025:09:55:25] ENGINE Bus STARTED
Nov 23 04:55:26 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:26 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:26 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:26 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:26 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:26 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Loading python module 'rbd_support'
Nov 23 04:55:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:26.444+0000 7fe4fe40c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Loading python module 'restful'
Nov 23 04:55:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:26.530+0000 7fe4fe40c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Loading python module 'rgw'
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 04:55:26 localhost ceph-mgr[287623]: mgr[py] Loading python module 'rook'
Nov 23 04:55:26 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:26.866+0000 7fe4fe40c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.269804) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727269845, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 767, "num_deletes": 258, "total_data_size": 2031449, "memory_usage": 2157792, "flush_reason": "Manual Compaction"}
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727280113, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1315966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14945, "largest_seqno": 15707, "table_properties": {"data_size": 1312190, "index_size": 1503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8763, "raw_average_key_size": 19, "raw_value_size": 1304214, "raw_average_value_size": 2829, "num_data_blocks": 62, "num_entries": 461, "num_filter_entries": 461, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891702, "oldest_key_time": 1763891702, "file_creation_time": 1763891727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 10373 microseconds, and 4780 cpu microseconds.
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.280175) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1315966 bytes OK
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.280201) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.281966) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.281992) EVENT_LOG_v1 {"time_micros": 1763891727281984, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.282011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2027190, prev total WAL file size 2027190, number of live WAL files 2.
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.282893) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353237' seq:72057594037927935, type:22 .. '6B760031373835' seq:0, type:0; will stop at (end)
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1285KB)], [21(17MB)]
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727282940, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 19895201, "oldest_snapshot_seqno": -1}
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:27.316+0000 7fe4fe40c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Loading python module 'selftest'
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 11971 keys, 18720625 bytes, temperature: kUnknown
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727362482, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 18720625, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18652577, "index_size": 37035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29957, "raw_key_size": 323457, "raw_average_key_size": 27, "raw_value_size": 18448573, "raw_average_value_size": 1541, "num_data_blocks": 1396, "num_entries": 11971, "num_filter_entries": 11971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763891727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.362837) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 18720625 bytes
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.364757) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.7 rd, 234.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 17.7 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(29.3) write-amplify(14.2) OK, records in: 12518, records dropped: 547 output_compression: NoCompression
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.364789) EVENT_LOG_v1 {"time_micros": 1763891727364775, "job": 10, "event": "compaction_finished", "compaction_time_micros": 79686, "compaction_time_cpu_micros": 43461, "output_level": 6, "num_output_files": 1, "total_output_size": 18720625, "num_input_records": 12518, "num_output_records": 11971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727365096, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 23 04:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891727367887, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.282843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.367923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.367928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.367931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.367934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:27 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:27.367937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:55:27 localhost podman[306411]: 2025-11-23 09:55:27.373113627 +0000 UTC m=+0.124130417 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Loading python module 'snap_schedule'
Nov 23 04:55:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:27.380+0000 7fe4fe40c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost podman[306411]: 2025-11-23 09:55:27.388518777 +0000 UTC m=+0.139535547 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 04:55:27 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Loading python module 'stats'
Nov 23 04:55:27 localhost podman[306461]: 2025-11-23 09:55:27.464707655 +0000 UTC m=+0.087515088 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9)
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 04:55:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:55:27 localhost podman[306461]: 2025-11-23 09:55:27.479449218 +0000 UTC m=+0.102256651 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Nov 23 04:55:27 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Loading python module 'status'
Nov 23 04:55:27 localhost podman[306462]: 2025-11-23 09:55:27.526189942 +0000 UTC m=+0.142028284 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:55:27 localhost podman[306462]: 2025-11-23 09:55:27.572301861 +0000 UTC m=+0.188140203 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:55:27 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:55:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:27.583+0000 7fe4fe40c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Loading python module 'telegraf'
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Loading python module 'telemetry'
Nov 23 04:55:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:27.644+0000 7fe4fe40c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:27.778+0000 7fe4fe40c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Loading python module 'test_orchestrator'
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:27.931+0000 7fe4fe40c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 23 04:55:27 localhost ceph-mgr[287623]: mgr[py] Loading python module 'volumes'
Nov 23 04:55:28 localhost ceph-mgr[287623]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 04:55:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:28.119+0000 7fe4fe40c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 23 04:55:28 localhost ceph-mgr[287623]: mgr[py] Loading python module 'zabbix'
Nov 23 04:55:28 localhost ceph-mgr[287623]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 04:55:28 localhost ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-mgr-np0005532586-thmvqb[287596]: 2025-11-23T09:55:28.178+0000 7fe4fe40c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 23 04:55:28 localhost ceph-mgr[287623]: ms_deliver_dispatch: unhandled message 0x558428259600 mon_map magic: 0 from mon.2 v2:172.18.0.105:3300/0
Nov 23 04:55:28 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.106:6810/2037590349
Nov 23 04:55:28 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 04:55:28 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:55:28 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 04:55:28 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 04:55:28 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 04:55:28 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 04:55:28 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/etc/ceph/ceph.conf
Nov 23 04:55:28 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/etc/ceph/ceph.conf
Nov 23 04:55:28 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/etc/ceph/ceph.conf
Nov 23 04:55:28 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:55:28 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:55:28 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.conf
Nov 23 04:55:29 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:55:29 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:55:29 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 23 04:55:29 localhost ceph-mon[302802]: Updating np0005532584.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:55:29 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:29 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.678176) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729678314, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 374, "num_deletes": 251, "total_data_size": 643977, "memory_usage": 651944, "flush_reason": "Manual Compaction"}
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729682989, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 427045, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15712, "largest_seqno": 16081, "table_properties": {"data_size": 424623, "index_size": 533, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6885, "raw_average_key_size": 21, "raw_value_size": 419482, "raw_average_value_size": 1282, "num_data_blocks": 20, "num_entries": 327, "num_filter_entries": 327, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891727, "oldest_key_time": 1763891727, "file_creation_time": 1763891729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 4828 microseconds, and 2032 cpu microseconds.
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.683039) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 427045 bytes OK
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.683066) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.684716) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.684738) EVENT_LOG_v1 {"time_micros": 1763891729684731, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.684766) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 641384, prev total WAL file size 641384, number of live WAL files 2.
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.687810) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end)
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(417KB)], [24(17MB)]
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729687866, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19147670, "oldest_snapshot_seqno": -1}
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11779 keys, 16438941 bytes, temperature: kUnknown
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729765500, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 16438941, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16374306, "index_size": 34070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29509, "raw_key_size": 320136, "raw_average_key_size": 27, "raw_value_size": 16175705, "raw_average_value_size": 1373, "num_data_blocks": 1268, "num_entries": 11779, "num_filter_entries": 11779, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763891729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.766266) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 16438941 bytes
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.768099) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.0 rd, 210.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 17.9 +0.0 blob) out(15.7 +0.0 blob), read-write-amplify(83.3) write-amplify(38.5) OK, records in: 12298, records dropped: 519 output_compression: NoCompression
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.768131) EVENT_LOG_v1 {"time_micros": 1763891729768116, "job": 12, "event": "compaction_finished", "compaction_time_micros": 78156, "compaction_time_cpu_micros": 44298, "output_level": 6, "num_output_files": 1, "total_output_size": 16438941, "num_input_records": 12298, "num_output_records": 11779, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729768608, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891729771993, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.687750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.772088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.772097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.772101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.772104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:29 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:55:29.772109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:55:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:30 localhost ceph-mon[302802]: Updating np0005532585.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:55:30 localhost ceph-mon[302802]: Updating np0005532586.localdomain:/var/lib/ceph/46550e70-79cb-5f55-bf6d-1204b97e083b/config/ceph.client.admin.keyring
Nov 23 04:55:30 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:30 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:30 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:30 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:30 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:30 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:55:30 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:55:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:55:37 localhost podman[307058]: 2025-11-23 09:55:37.178380908 +0000 UTC m=+0.087478737 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:55:37 localhost podman[307058]: 2025-11-23 09:55:37.188386262 +0000 UTC m=+0.097484121 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, tcib_managed=true)
Nov 23 04:55:37 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:55:37 localhost systemd[1]: tmp-crun.M8JkDx.mount: Deactivated successfully.
Nov 23 04:55:37 localhost podman[307060]: 2025-11-23 09:55:37.249443636 +0000 UTC m=+0.151694839 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:55:37 localhost podman[307060]: 2025-11-23 09:55:37.286929529 +0000 UTC m=+0.189180752 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 23 04:55:37 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:55:37 localhost podman[307059]: 2025-11-23 09:55:37.33463054 +0000 UTC m=+0.239260047 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:55:37 localhost podman[307059]: 2025-11-23 09:55:37.348003104 +0000 UTC m=+0.252632641 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:55:37 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:55:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:55:40 localhost podman[307122]: 2025-11-23 09:55:40.169098716 +0000 UTC m=+0.076940210 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 23 04:55:40 localhost podman[307122]: 2025-11-23 09:55:40.185140333 +0000 UTC m=+0.092981837 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 23 04:55:40 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:55:41 localhost podman[240144]: time="2025-11-23T09:55:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:55:41 localhost podman[240144]: @ - - [23/Nov/2025:09:55:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:55:41 localhost podman[240144]: @ - - [23/Nov/2025:09:55:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19189 "" "Go-http-client/1.1"
Nov 23 04:55:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:52 localhost openstack_network_exporter[242118]: ERROR   09:55:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:55:52 localhost openstack_network_exporter[242118]: ERROR   09:55:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:55:52 localhost openstack_network_exporter[242118]: ERROR   09:55:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:55:52 localhost openstack_network_exporter[242118]: ERROR   09:55:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:55:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:55:52 localhost openstack_network_exporter[242118]: ERROR   09:55:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:55:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:55:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:55:58 localhost podman[307142]: 2025-11-23 09:55:58.179729326 +0000 UTC m=+0.080117917 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:55:58 localhost podman[307142]: 2025-11-23 09:55:58.217122556 +0000 UTC m=+0.117511157 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:55:58 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:55:58 localhost podman[307141]: 2025-11-23 09:55:58.274227143 +0000 UTC m=+0.176724781 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_id=edpm, vendor=Red Hat, Inc.)
Nov 23 04:55:58 localhost systemd[1]: tmp-crun.Yoc4Ia.mount: Deactivated successfully.
Nov 23 04:55:58 localhost podman[307143]: 2025-11-23 09:55:58.296372317 +0000 UTC m=+0.192032168 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:55:58 localhost podman[307141]: 2025-11-23 09:55:58.321120823 +0000 UTC m=+0.223618481 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, release=1755695350)
Nov 23 04:55:58 localhost podman[307143]: 2025-11-23 09:55:58.329647615 +0000 UTC m=+0.225307456 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:55:58 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:55:58 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:55:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:05 localhost sshd[307203]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:56:07 localhost nova_compute[281613]: 2025-11-23 09:56:07.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:07 localhost nova_compute[281613]: 2025-11-23 09:56:07.021 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:56:08 localhost nova_compute[281613]: 2025-11-23 09:56:08.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:56:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:56:08 localhost podman[307205]: 2025-11-23 09:56:08.172366629 +0000 UTC m=+0.080977089 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:56:08 localhost podman[307205]: 2025-11-23 09:56:08.205909315 +0000 UTC m=+0.114519805 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 04:56:08 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:56:08 localhost podman[307206]: 2025-11-23 09:56:08.22592407 +0000 UTC m=+0.132226717 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:56:08 localhost podman[307206]: 2025-11-23 09:56:08.238933765 +0000 UTC m=+0.145236452 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:56:08 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:56:08 localhost podman[307207]: 2025-11-23 09:56:08.328145928 +0000 UTC m=+0.231230108 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 04:56:08 localhost podman[307207]: 2025-11-23 09:56:08.369899057 +0000 UTC m=+0.272983277 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 04:56:08 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.037 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.038 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.038 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.038 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.039 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:56:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:56:09.262 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:56:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:56:09.263 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:56:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:56:09.263 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:56:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:56:09 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3355387136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.465 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.720 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.722 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12010MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.722 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.722 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:56:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.797 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.798 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:56:09 localhost nova_compute[281613]: 2025-11-23 09:56:09.825 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:56:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:56:10 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/68488798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:56:10 localhost nova_compute[281613]: 2025-11-23 09:56:10.241 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:56:10 localhost nova_compute[281613]: 2025-11-23 09:56:10.248 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:56:10 localhost nova_compute[281613]: 2025-11-23 09:56:10.263 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:56:10 localhost nova_compute[281613]: 2025-11-23 09:56:10.265 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:56:10 localhost nova_compute[281613]: 2025-11-23 09:56:10.266 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:56:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:56:11 localhost podman[307313]: 2025-11-23 09:56:11.173505243 +0000 UTC m=+0.079234782 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:56:11 localhost podman[307313]: 2025-11-23 09:56:11.213935735 +0000 UTC m=+0.119665264 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:56:11 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:56:11 localhost nova_compute[281613]: 2025-11-23 09:56:11.267 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:11 localhost nova_compute[281613]: 2025-11-23 09:56:11.267 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:56:11 localhost nova_compute[281613]: 2025-11-23 09:56:11.267 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:56:11 localhost nova_compute[281613]: 2025-11-23 09:56:11.279 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:56:11 localhost nova_compute[281613]: 2025-11-23 09:56:11.280 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:11 localhost nova_compute[281613]: 2025-11-23 09:56:11.280 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:11 localhost podman[240144]: time="2025-11-23T09:56:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:56:11 localhost podman[240144]: @ - - [23/Nov/2025:09:56:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:56:11 localhost podman[240144]: @ - - [23/Nov/2025:09:56:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19190 "" "Go-http-client/1.1"
Nov 23 04:56:12 localhost nova_compute[281613]: 2025-11-23 09:56:12.028 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:56:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:22 localhost openstack_network_exporter[242118]: ERROR   09:56:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:56:22 localhost openstack_network_exporter[242118]: ERROR   09:56:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:56:22 localhost openstack_network_exporter[242118]: ERROR   09:56:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:56:22 localhost openstack_network_exporter[242118]: ERROR   09:56:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:56:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:56:22 localhost openstack_network_exporter[242118]: ERROR   09:56:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:56:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:56:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:56:29 localhost podman[307332]: 2025-11-23 09:56:29.178012546 +0000 UTC m=+0.083008426 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:56:29 localhost podman[307332]: 2025-11-23 09:56:29.212005833 +0000 UTC m=+0.117001703 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:56:29 localhost systemd[1]: tmp-crun.PdHsSi.mount: Deactivated successfully.
Nov 23 04:56:29 localhost podman[307333]: 2025-11-23 09:56:29.230923659 +0000 UTC m=+0.132859104 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:56:29 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:56:29 localhost podman[307333]: 2025-11-23 09:56:29.236205583 +0000 UTC m=+0.138141028 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:56:29 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:56:29 localhost podman[307331]: 2025-11-23 09:56:29.280780519 +0000 UTC m=+0.188610436 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=)
Nov 23 04:56:29 localhost podman[307331]: 2025-11-23 09:56:29.29178867 +0000 UTC m=+0.199618577 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, vendor=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350)
Nov 23 04:56:29 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:56:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:56:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:56:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:56:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:56:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:56:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:56:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:56:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:56:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:56:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:56:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:56:39 localhost podman[307536]: 2025-11-23 09:56:39.155958307 +0000 UTC m=+0.063906934 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:56:39 localhost podman[307536]: 2025-11-23 09:56:39.166979188 +0000 UTC m=+0.074927855 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:56:39 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:56:39 localhost podman[307535]: 2025-11-23 09:56:39.22316582 +0000 UTC m=+0.130421209 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:56:39 localhost podman[307535]: 2025-11-23 09:56:39.228012103 +0000 UTC m=+0.135267452 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:56:39 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:56:39 localhost podman[307537]: 2025-11-23 09:56:39.278337426 +0000 UTC m=+0.179280992 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 23 04:56:39 localhost podman[307537]: 2025-11-23 09:56:39.346048722 +0000 UTC m=+0.246992298 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:56:39 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:56:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:41 localhost podman[240144]: time="2025-11-23T09:56:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:56:41 localhost podman[240144]: @ - - [23/Nov/2025:09:56:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:56:41 localhost podman[240144]: @ - - [23/Nov/2025:09:56:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19191 "" "Go-http-client/1.1"
Nov 23 04:56:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:56:42 localhost ovn_metadata_agent[159423]: 2025-11-23 09:56:42.101 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:56:42 localhost ovn_metadata_agent[159423]: 2025-11-23 09:56:42.102 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 04:56:42 localhost podman[307594]: 2025-11-23 09:56:42.172682236 +0000 UTC m=+0.079766188 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 23 04:56:42 localhost podman[307594]: 2025-11-23 09:56:42.187938601 +0000 UTC m=+0.095022553 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 04:56:42 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:56:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:46 localhost ovn_metadata_agent[159423]: 2025-11-23 09:56:46.105 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:56:47 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e88 e88: 6 total, 6 up, 6 in
Nov 23 04:56:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 e89: 6 total, 6 up, 6 in
Nov 23 04:56:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:52 localhost openstack_network_exporter[242118]: ERROR   09:56:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:56:52 localhost openstack_network_exporter[242118]: ERROR   09:56:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:56:52 localhost openstack_network_exporter[242118]: ERROR   09:56:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:56:52 localhost openstack_network_exporter[242118]: ERROR   09:56:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:56:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:56:52 localhost openstack_network_exporter[242118]: ERROR   09:56:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:56:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:56:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:56:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:57:00 localhost systemd[1]: tmp-crun.871atQ.mount: Deactivated successfully.
Nov 23 04:57:00 localhost podman[307613]: 2025-11-23 09:57:00.188565162 +0000 UTC m=+0.092643458 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 23 04:57:00 localhost podman[307613]: 2025-11-23 09:57:00.235144332 +0000 UTC m=+0.139222658 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Nov 23 04:57:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 04:57:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2399139433' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 04:57:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 04:57:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2399139433' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 04:57:00 localhost podman[307614]: 2025-11-23 09:57:00.247627903 +0000 UTC m=+0.148651646 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:57:00 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:57:00 localhost podman[307614]: 2025-11-23 09:57:00.257661066 +0000 UTC m=+0.158684789 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 04:57:00 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:57:00 localhost podman[307615]: 2025-11-23 09:57:00.340039763 +0000 UTC m=+0.238619729 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:57:00 localhost podman[307615]: 2025-11-23 09:57:00.352953195 +0000 UTC m=+0.251533191 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:57:00 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:57:01 localhost systemd[1]: tmp-crun.ZmUDAi.mount: Deactivated successfully.
Nov 23 04:57:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:08 localhost nova_compute[281613]: 2025-11-23 09:57:08.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:08 localhost nova_compute[281613]: 2025-11-23 09:57:08.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:57:09 localhost nova_compute[281613]: 2025-11-23 09:57:09.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:57:09.265 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:57:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:57:09.266 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:57:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:57:09.266 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:57:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.034 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.035 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.035 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.052 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.053 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.053 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.054 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.054 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:57:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:57:10 localhost podman[307676]: 2025-11-23 09:57:10.207646533 +0000 UTC m=+0.103663399 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:57:10 localhost podman[307676]: 2025-11-23 09:57:10.220993017 +0000 UTC m=+0.117009903 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:57:10 localhost podman[307677]: 2025-11-23 09:57:10.268459422 +0000 UTC m=+0.162788141 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:57:10 localhost podman[307675]: 2025-11-23 09:57:10.309854372 +0000 UTC m=+0.213740842 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:57:10 localhost podman[307675]: 2025-11-23 09:57:10.313797679 +0000 UTC m=+0.217684189 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:57:10 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:57:10 localhost podman[307677]: 2025-11-23 09:57:10.330019521 +0000 UTC m=+0.224348270 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 23 04:57:10 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:57:10 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:57:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:57:10 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4241493939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.524 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.746 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.748 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=12003MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.748 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.749 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.821 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.821 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:57:10 localhost nova_compute[281613]: 2025-11-23 09:57:10.843 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:57:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:57:11 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1983039144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:57:11 localhost nova_compute[281613]: 2025-11-23 09:57:11.260 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:57:11 localhost nova_compute[281613]: 2025-11-23 09:57:11.267 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:57:11 localhost podman[240144]: time="2025-11-23T09:57:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:57:11 localhost nova_compute[281613]: 2025-11-23 09:57:11.290 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:57:11 localhost nova_compute[281613]: 2025-11-23 09:57:11.292 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:57:11 localhost nova_compute[281613]: 2025-11-23 09:57:11.292 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:57:11 localhost podman[240144]: @ - - [23/Nov/2025:09:57:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:57:11 localhost podman[240144]: @ - - [23/Nov/2025:09:57:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19190 "" "Go-http-client/1.1"
Nov 23 04:57:12 localhost nova_compute[281613]: 2025-11-23 09:57:12.277 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:12 localhost nova_compute[281613]: 2025-11-23 09:57:12.291 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:12 localhost nova_compute[281613]: 2025-11-23 09:57:12.292 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:13 localhost nova_compute[281613]: 2025-11-23 09:57:13.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:13 localhost nova_compute[281613]: 2025-11-23 09:57:13.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:57:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:57:13 localhost podman[307782]: 2025-11-23 09:57:13.170198394 +0000 UTC m=+0.078197504 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 04:57:13 localhost podman[307782]: 2025-11-23 09:57:13.210153984 +0000 UTC m=+0.118153094 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 04:57:13 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:57:14 localhost ovn_controller[153786]: 2025-11-23T09:57:14Z|00040|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 23 04:57:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:22 localhost openstack_network_exporter[242118]: ERROR   09:57:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:57:22 localhost openstack_network_exporter[242118]: ERROR   09:57:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:57:22 localhost openstack_network_exporter[242118]: ERROR   09:57:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:57:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:57:22 localhost openstack_network_exporter[242118]: ERROR   09:57:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:57:22 localhost openstack_network_exporter[242118]: ERROR   09:57:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:57:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:57:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:57:31 localhost podman[307803]: 2025-11-23 09:57:31.181988387 +0000 UTC m=+0.080750633 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:57:31 localhost podman[307803]: 2025-11-23 09:57:31.217982119 +0000 UTC m=+0.116744385 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:57:31 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:57:31 localhost podman[307802]: 2025-11-23 09:57:31.231894799 +0000 UTC m=+0.136132714 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:57:31 localhost podman[307802]: 2025-11-23 09:57:31.268567999 +0000 UTC m=+0.172805884 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Nov 23 04:57:31 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:57:31 localhost podman[307804]: 2025-11-23 09:57:31.342387733 +0000 UTC m=+0.237074548 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:57:31 localhost podman[307804]: 2025-11-23 09:57:31.379037083 +0000 UTC m=+0.273723848 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:57:31 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:57:31 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:57:31.698 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:31Z, description=, device_id=af12546c-762e-460a-92cf-0a6f5f6b8733, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cfedf0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cfe340>], id=b30cf160-d3eb-4eb8-8cd4-def5f3b9d9ab, ip_allocation=immediate, mac_address=fa:16:3e:76:f1:7e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=260, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:57:31Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:57:31 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:57:31 localhost podman[307880]: 2025-11-23 09:57:31.983673966 +0000 UTC m=+0.064046268 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:57:31 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:57:31 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:57:32 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:57:32.196 262721 INFO neutron.agent.dhcp.agent [None req-b40ec1a2-acef-4e07-b162-ce3cb235fe73 - - - - - -] DHCP configuration for ports {'b30cf160-d3eb-4eb8-8cd4-def5f3b9d9ab'} is completed#033[00m
Nov 23 04:57:33 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:57:33 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:57:33 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:57:33.829 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:33Z, description=, device_id=b378c727-9d77-4582-a9ca-944830efc847, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa79161f8e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa79161f280>], id=b52d7e9d-04e4-419c-a222-f5a946a335be, ip_allocation=immediate, mac_address=fa:16:3e:db:d6:17, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=271, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:57:33Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:57:34 localhost podman[308004]: 2025-11-23 09:57:34.055720605 +0000 UTC m=+0.059022321 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:57:34 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:57:34 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:57:34 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:57:34 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:57:34.264 262721 INFO neutron.agent.dhcp.agent [None req-ecc72458-edf3-48cc-8aa2-3f6aaef3ffef - - - - - -] DHCP configuration for ports {'b52d7e9d-04e4-419c-a222-f5a946a335be'} is completed#033[00m
Nov 23 04:57:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:57:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:37 localhost sshd[308025]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:57:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:57:41 localhost podman[308027]: 2025-11-23 09:57:41.197142663 +0000 UTC m=+0.096873593 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 23 04:57:41 localhost podman[308027]: 2025-11-23 09:57:41.227938933 +0000 UTC m=+0.127669843 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 04:57:41 localhost podman[308028]: 2025-11-23 09:57:41.242968594 +0000 UTC m=+0.140920396 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:57:41 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:57:41 localhost podman[308028]: 2025-11-23 09:57:41.255339831 +0000 UTC m=+0.153291673 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:57:41 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:57:41 localhost podman[240144]: time="2025-11-23T09:57:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:57:41 localhost podman[240144]: @ - - [23/Nov/2025:09:57:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:57:41 localhost podman[308029]: 2025-11-23 09:57:41.347391702 +0000 UTC m=+0.244550692 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:57:41 localhost podman[240144]: @ - - [23/Nov/2025:09:57:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19190 "" "Go-http-client/1.1"
Nov 23 04:57:41 localhost podman[308029]: 2025-11-23 09:57:41.45803837 +0000 UTC m=+0.355197320 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 04:57:41 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:57:42 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:57:42.751 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:42Z, description=, device_id=0f8464a3-f81d-4757-a6d6-f5dc7dd25ed6, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d0e940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d0e160>], id=c3ccdaac-a123-4c73-8200-8c4e8c217830, ip_allocation=immediate, mac_address=fa:16:3e:0d:b0:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=349, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:57:42Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:57:42 localhost ovn_metadata_agent[159423]: 2025-11-23 09:57:42.764 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:57:42 localhost ovn_metadata_agent[159423]: 2025-11-23 09:57:42.766 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 04:57:42 localhost podman[308109]: 2025-11-23 09:57:42.986952265 +0000 UTC m=+0.065837627 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:57:42 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 04:57:42 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:57:42 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:57:43 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:57:43.535 262721 INFO neutron.agent.dhcp.agent [None req-cf9af6d3-b8f1-4a6b-af84-59edfe4dece5 - - - - - -] DHCP configuration for ports {'c3ccdaac-a123-4c73-8200-8c4e8c217830'} is completed#033[00m
Nov 23 04:57:43 localhost systemd[1]: tmp-crun.orgksr.mount: Deactivated successfully.
Nov 23 04:57:43 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:57:43 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:57:43 localhost podman[308145]: 2025-11-23 09:57:43.709582266 +0000 UTC m=+0.073334471 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 23 04:57:43 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:57:43 localhost podman[308160]: 2025-11-23 09:57:43.830780663 +0000 UTC m=+0.094954751 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:57:43 localhost podman[308160]: 2025-11-23 09:57:43.839923192 +0000 UTC m=+0.104097270 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:57:43 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:57:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:45 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:57:45.539 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:57:45Z, description=, device_id=d7833880-0891-4456-a71b-58120feefed8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c9e790>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c9e4f0>], id=520ad2e8-e035-4578-bc1f-c7ac751f42fb, ip_allocation=immediate, mac_address=fa:16:3e:a1:6a:37, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=351, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:57:45Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:57:45 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 04:57:45 localhost podman[308202]: 2025-11-23 09:57:45.736716441 +0000 UTC m=+0.055307710 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:57:45 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:57:45 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:57:45 localhost ovn_metadata_agent[159423]: 2025-11-23 09:57:45.767 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:57:46 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:57:46.048 262721 INFO neutron.agent.dhcp.agent [None req-8dc18ba3-8e03-4285-bae1-57502611e540 - - - - - -] DHCP configuration for ports {'520ad2e8-e035-4578-bc1f-c7ac751f42fb'} is completed#033[00m
Nov 23 04:57:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:52 localhost openstack_network_exporter[242118]: ERROR   09:57:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:57:52 localhost openstack_network_exporter[242118]: ERROR   09:57:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:57:52 localhost openstack_network_exporter[242118]: ERROR   09:57:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:57:52 localhost openstack_network_exporter[242118]: ERROR   09:57:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:57:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:57:52 localhost openstack_network_exporter[242118]: ERROR   09:57:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:57:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:57:54 localhost neutron_sriov_agent[255613]: 2025-11-23 09:57:54.035 2 INFO neutron.agent.securitygroups_rpc [None req-c244d218-e6b7-4260-9702-7bb508c7ef68 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Security group member updated ['ff44a28d-1e1f-4163-b206-fdf77022bf0b']#033[00m
Nov 23 04:57:54 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:57:54 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:57:54 localhost podman[308240]: 2025-11-23 09:57:54.674627732 +0000 UTC m=+0.061309163 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:57:54 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:57:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:57:55 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e90 e90: 6 total, 6 up, 6 in
Nov 23 04:57:58 localhost neutron_sriov_agent[255613]: 2025-11-23 09:57:58.470 2 INFO neutron.agent.securitygroups_rpc [None req-d68d9a06-c0e6-4bcf-9e27-a376d467ec2a 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Security group member updated ['ff44a28d-1e1f-4163-b206-fdf77022bf0b']#033[00m
Nov 23 04:57:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 04:58:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/650142256' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 04:58:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 04:58:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/650142256' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 04:58:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e91 e91: 6 total, 6 up, 6 in
Nov 23 04:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:58:02 localhost podman[308262]: 2025-11-23 09:58:02.183025912 +0000 UTC m=+0.080909758 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:58:02 localhost podman[308262]: 2025-11-23 09:58:02.197013764 +0000 UTC m=+0.094897620 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 23 04:58:02 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:58:02 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:02.252 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:01Z, description=, device_id=caa365d3-aa93-4c7c-a692-c3fee4872fc2, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d0e340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d0ecd0>], id=361d45f4-c351-404d-99f6-3e2f73dfea9d, ip_allocation=immediate, mac_address=fa:16:3e:87:75:8a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=477, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:58:01Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:58:02 localhost podman[308261]: 2025-11-23 09:58:02.2841222 +0000 UTC m=+0.184995638 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64)
Nov 23 04:58:02 localhost podman[308261]: 2025-11-23 09:58:02.300292391 +0000 UTC m=+0.201165869 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 04:58:02 localhost podman[308263]: 2025-11-23 09:58:02.338438781 +0000 UTC m=+0.233246063 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 04:58:02 localhost podman[308263]: 2025-11-23 09:58:02.347280852 +0000 UTC m=+0.242088144 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 04:58:02 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:58:02 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:58:02 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 04:58:02 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:58:02 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:58:02 localhost podman[308341]: 2025-11-23 09:58:02.477200786 +0000 UTC m=+0.063701189 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 04:58:02 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:02.786 262721 INFO neutron.agent.dhcp.agent [None req-6fc467bf-e4d1-4ff5-be08-6f4d6dca29ac - - - - - -] DHCP configuration for ports {'361d45f4-c351-404d-99f6-3e2f73dfea9d'} is completed#033[00m
Nov 23 04:58:03 localhost nova_compute[281613]: 2025-11-23 09:58:03.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:03 localhost nova_compute[281613]: 2025-11-23 09:58:03.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 04:58:03 localhost nova_compute[281613]: 2025-11-23 09:58:03.032 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 04:58:04 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:04.283 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 04:58:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:05 localhost nova_compute[281613]: 2025-11-23 09:58:05.275 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 04:58:05 localhost nova_compute[281613]: 2025-11-23 09:58:05.276 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 04:58:05 localhost nova_compute[281613]: 2025-11-23 09:58:05.301 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 23 04:58:05 localhost nova_compute[281613]: 2025-11-23 09:58:05.483 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 04:58:05 localhost nova_compute[281613]: 2025-11-23 09:58:05.484 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 04:58:05 localhost nova_compute[281613]: 2025-11-23 09:58:05.491 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 23 04:58:05 localhost nova_compute[281613]: 2025-11-23 09:58:05.491 281617 INFO nova.compute.claims [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Claim successful on node np0005532586.localdomain
Nov 23 04:58:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:05.522 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:05Z, description=, device_id=a142a3f2-a258-4397-9ea5-84e0cd42ff93, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cfe190>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cfeee0>], id=d3cdf672-67cf-47e6-a0e4-daddced0037d, ip_allocation=immediate, mac_address=fa:16:3e:8f:dc:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=496, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:58:05Z on network 4888f017-3f3f-45ef-b058-53b634233093
Nov 23 04:58:05 localhost nova_compute[281613]: 2025-11-23 09:58:05.639 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 04:58:05 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 04:58:05 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:58:05 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:58:05 localhost podman[308381]: 2025-11-23 09:58:05.749025863 +0000 UTC m=+0.073953209 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:58:06 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:06.058 262721 INFO neutron.agent.dhcp.agent [None req-b16f9814-b3be-4e63-9757-d41d4f26a070 - - - - - -] DHCP configuration for ports {'d3cdf672-67cf-47e6-a0e4-daddced0037d'} is completed
Nov 23 04:58:06 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:58:06 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1362528200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.109 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.117 281617 DEBUG nova.compute.provider_tree [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.156 281617 DEBUG nova.scheduler.client.report [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.179 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.180 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.255 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.256 281617 DEBUG nova.network.neutron [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.285 281617 INFO nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.317 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.414 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.416 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.417 281617 INFO nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Creating image(s)
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.461 281617 DEBUG nova.storage.rbd_utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] rbd image 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.510 281617 DEBUG nova.storage.rbd_utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] rbd image 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.561 281617 DEBUG nova.storage.rbd_utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] rbd image 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.566 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 04:58:06 localhost nova_compute[281613]: 2025-11-23 09:58:06.567 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 04:58:07 localhost nova_compute[281613]: 2025-11-23 09:58:07.294 281617 DEBUG nova.virt.libvirt.imagebackend [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Image locations are: [{'url': 'rbd://46550e70-79cb-5f55-bf6d-1204b97e083b/images/c5806483-57a8-4254-b41b-254b888c8606/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://46550e70-79cb-5f55-bf6d-1204b97e083b/images/c5806483-57a8-4254-b41b-254b888c8606/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 23 04:58:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 e92: 6 total, 6 up, 6 in
Nov 23 04:58:07 localhost nova_compute[281613]: 2025-11-23 09:58:07.628 281617 WARNING oslo_policy.policy [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 23 04:58:07 localhost nova_compute[281613]: 2025-11-23 09:58:07.629 281617 WARNING oslo_policy.policy [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 23 04:58:07 localhost nova_compute[281613]: 2025-11-23 09:58:07.633 281617 DEBUG nova.policy [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9a28cb0574d148bf982a2a1a0b495020', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 23 04:58:08 localhost sshd[308477]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.315 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.396 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.part --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.399 281617 DEBUG nova.virt.images [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] c5806483-57a8-4254-b41b-254b888c8606 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.402 281617 DEBUG nova.privsep.utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.403 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.part /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.608 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.part /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.converted" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.613 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.703 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a.converted --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.705 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.738 281617 DEBUG nova.storage.rbd_utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] rbd image 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 23 04:58:08 localhost nova_compute[281613]: 2025-11-23 09:58:08.742 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 23 04:58:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:09.266 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 04:58:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:09.266 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 04:58:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:09.267 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.361 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.476 281617 DEBUG nova.storage.rbd_utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] resizing rbd image 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.571 281617 DEBUG nova.network.neutron [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Successfully updated port: 27d340a7-60a4-4a73-9f16-bae5ab3411da _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.632 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.632 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquired lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.633 281617 DEBUG nova.network.neutron [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.642 281617 DEBUG nova.objects.instance [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lazy-loading 'migration_context' on Instance uuid 76d6f171-13c9-4730-8ed3-ab467ef6831a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.662 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.663 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Ensure instance console log exists: /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.664 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.664 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.665 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 04:58:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:09 localhost nova_compute[281613]: 2025-11-23 09:58:09.809 281617 DEBUG nova.network.neutron [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.031 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.032 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.140 281617 DEBUG nova.compute.manager [req-7918b487-53dc-4f56-ab75-9ba83f6f40ef req-54047789-7c79-4419-9c7b-ae1f048ccf5e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-changed-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.141 281617 DEBUG nova.compute.manager [req-7918b487-53dc-4f56-ab75-9ba83f6f40ef req-54047789-7c79-4419-9c7b-ae1f048ccf5e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Refreshing instance network info cache due to event network-changed-27d340a7-60a4-4a73-9f16-bae5ab3411da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.141 281617 DEBUG oslo_concurrency.lockutils [req-7918b487-53dc-4f56-ab75-9ba83f6f40ef req-54047789-7c79-4419-9c7b-ae1f048ccf5e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.401 281617 DEBUG nova.network.neutron [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updating instance_info_cache with network_info: [{"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.436 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Releasing lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.437 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Instance network_info: |[{"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.437 281617 DEBUG oslo_concurrency.lockutils [req-7918b487-53dc-4f56-ab75-9ba83f6f40ef req-54047789-7c79-4419-9c7b-ae1f048ccf5e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquired lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.438 281617 DEBUG nova.network.neutron [req-7918b487-53dc-4f56-ab75-9ba83f6f40ef req-54047789-7c79-4419-9c7b-ae1f048ccf5e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Refreshing network info cache for port 27d340a7-60a4-4a73-9f16-bae5ab3411da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.444 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Start _get_guest_xml network_info=[{"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T09:56:45Z,direct_url=<?>,disk_format='qcow2',id=c5806483-57a8-4254-b41b-254b888c8606,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1915d3e5d4254231a0517e2dcf35848f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T09:56:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'image_id': 'c5806483-57a8-4254-b41b-254b888c8606'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.450 281617 WARNING nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.453 281617 DEBUG nova.virt.libvirt.host [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Searching host: 'np0005532586.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.454 281617 DEBUG nova.virt.libvirt.host [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.456 281617 DEBUG nova.virt.libvirt.host [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Searching host: 'np0005532586.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.457 281617 DEBUG nova.virt.libvirt.host [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.458 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.458 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T09:56:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43b374b4-75d9-47f9-aa6b-ddb1a45f7c04',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T09:56:45Z,direct_url=<?>,disk_format='qcow2',id=c5806483-57a8-4254-b41b-254b888c8606,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1915d3e5d4254231a0517e2dcf35848f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T09:56:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.459 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.459 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.460 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.460 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.460 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.461 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.461 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.462 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.462 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.463 281617 DEBUG nova.virt.hardware [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.469 281617 DEBUG nova.privsep.utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.469 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:58:10 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:10.578 2 INFO neutron.agent.securitygroups_rpc [req-3532c496-51d7-40c7-b3da-c0e7be1692a4 req-d87dead5-02e8-46e7-bd25-42d652af07f6 b79ba98acc3c4b3580a3847feb119c9b 103398a293414a3081333eb24455a6bd - - default default] Security group rule updated ['280efa91-c004-412c-b87a-91a6eef9493c']#033[00m
Nov 23 04:58:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 04:58:10 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3139089852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 04:58:10 localhost nova_compute[281613]: 2025-11-23 09:58:10.972 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.008 281617 DEBUG nova.storage.rbd_utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] rbd image 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.013 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.032 281617 DEBUG nova.network.neutron [req-7918b487-53dc-4f56-ab75-9ba83f6f40ef req-54047789-7c79-4419-9c7b-ae1f048ccf5e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updated VIF entry in instance network info cache for port 27d340a7-60a4-4a73-9f16-bae5ab3411da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.033 281617 DEBUG nova.network.neutron [req-7918b487-53dc-4f56-ab75-9ba83f6f40ef req-54047789-7c79-4419-9c7b-ae1f048ccf5e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updating instance_info_cache with network_info: [{"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.037 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.038 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.047 281617 DEBUG oslo_concurrency.lockutils [req-7918b487-53dc-4f56-ab75-9ba83f6f40ef req-54047789-7c79-4419-9c7b-ae1f048ccf5e b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Releasing lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:58:11 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:11.072 2 INFO neutron.agent.securitygroups_rpc [req-ce34b552-7369-4724-9e28-4e57bb3059bd req-8a77dff8-1df4-4326-b30f-4088438850bd b79ba98acc3c4b3580a3847feb119c9b 103398a293414a3081333eb24455a6bd - - default default] Security group rule updated ['280efa91-c004-412c-b87a-91a6eef9493c']#033[00m
Nov 23 04:58:11 localhost podman[240144]: time="2025-11-23T09:58:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:58:11 localhost podman[240144]: @ - - [23/Nov/2025:09:58:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:58:11 localhost podman[240144]: @ - - [23/Nov/2025:09:58:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19197 "" "Go-http-client/1.1"
Nov 23 04:58:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 04:58:11 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1218640482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.536 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.538 281617 DEBUG nova.virt.libvirt.vif [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T09:58:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-151326874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005532586.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-151326874',id=6,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532586.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2148c18d8f24a6db12dc22c787e8b2e',ramdisk_id='',reservation_id='r-6eghyq4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1734069518',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1734069518-project-member'},ta
gs=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T09:58:06Z,user_data=None,user_id='9a28cb0574d148bf982a2a1a0b495020',uuid=76d6f171-13c9-4730-8ed3-ab467ef6831a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.539 281617 DEBUG nova.network.os_vif_util [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Converting VIF {"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.540 281617 DEBUG nova.network.os_vif_util [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.543 281617 DEBUG nova.objects.instance [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lazy-loading 'pci_devices' on Instance uuid 76d6f171-13c9-4730-8ed3-ab467ef6831a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.557 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] End _get_guest_xml xml=<domain type="kvm">
Nov 23 04:58:11 localhost nova_compute[281613]:  <uuid>76d6f171-13c9-4730-8ed3-ab467ef6831a</uuid>
Nov 23 04:58:11 localhost nova_compute[281613]:  <name>instance-00000006</name>
Nov 23 04:58:11 localhost nova_compute[281613]:  <memory>131072</memory>
Nov 23 04:58:11 localhost nova_compute[281613]:  <vcpu>1</vcpu>
Nov 23 04:58:11 localhost nova_compute[281613]:  <metadata>
Nov 23 04:58:11 localhost nova_compute[281613]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 04:58:11 localhost nova_compute[281613]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-151326874</nova:name>
Nov 23 04:58:11 localhost nova_compute[281613]:      <nova:creationTime>2025-11-23 09:58:10</nova:creationTime>
Nov 23 04:58:11 localhost nova_compute[281613]:      <nova:flavor name="m1.nano">
Nov 23 04:58:11 localhost nova_compute[281613]:        <nova:memory>128</nova:memory>
Nov 23 04:58:11 localhost nova_compute[281613]:        <nova:disk>1</nova:disk>
Nov 23 04:58:11 localhost nova_compute[281613]:        <nova:swap>0</nova:swap>
Nov 23 04:58:11 localhost nova_compute[281613]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 04:58:11 localhost nova_compute[281613]:        <nova:vcpus>1</nova:vcpus>
Nov 23 04:58:11 localhost nova_compute[281613]:      </nova:flavor>
Nov 23 04:58:11 localhost nova_compute[281613]:      <nova:owner>
Nov 23 04:58:11 localhost nova_compute[281613]:        <nova:user uuid="9a28cb0574d148bf982a2a1a0b495020">tempest-LiveAutoBlockMigrationV225Test-1734069518-project-member</nova:user>
Nov 23 04:58:11 localhost nova_compute[281613]:        <nova:project uuid="a2148c18d8f24a6db12dc22c787e8b2e">tempest-LiveAutoBlockMigrationV225Test-1734069518</nova:project>
Nov 23 04:58:11 localhost nova_compute[281613]:      </nova:owner>
Nov 23 04:58:11 localhost nova_compute[281613]:      <nova:root type="image" uuid="c5806483-57a8-4254-b41b-254b888c8606"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      <nova:ports>
Nov 23 04:58:11 localhost nova_compute[281613]:        <nova:port uuid="27d340a7-60a4-4a73-9f16-bae5ab3411da">
Nov 23 04:58:11 localhost nova_compute[281613]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 23 04:58:11 localhost nova_compute[281613]:        </nova:port>
Nov 23 04:58:11 localhost nova_compute[281613]:      </nova:ports>
Nov 23 04:58:11 localhost nova_compute[281613]:    </nova:instance>
Nov 23 04:58:11 localhost nova_compute[281613]:  </metadata>
Nov 23 04:58:11 localhost nova_compute[281613]:  <sysinfo type="smbios">
Nov 23 04:58:11 localhost nova_compute[281613]:    <system>
Nov 23 04:58:11 localhost nova_compute[281613]:      <entry name="manufacturer">RDO</entry>
Nov 23 04:58:11 localhost nova_compute[281613]:      <entry name="product">OpenStack Compute</entry>
Nov 23 04:58:11 localhost nova_compute[281613]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 04:58:11 localhost nova_compute[281613]:      <entry name="serial">76d6f171-13c9-4730-8ed3-ab467ef6831a</entry>
Nov 23 04:58:11 localhost nova_compute[281613]:      <entry name="uuid">76d6f171-13c9-4730-8ed3-ab467ef6831a</entry>
Nov 23 04:58:11 localhost nova_compute[281613]:      <entry name="family">Virtual Machine</entry>
Nov 23 04:58:11 localhost nova_compute[281613]:    </system>
Nov 23 04:58:11 localhost nova_compute[281613]:  </sysinfo>
Nov 23 04:58:11 localhost nova_compute[281613]:  <os>
Nov 23 04:58:11 localhost nova_compute[281613]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 04:58:11 localhost nova_compute[281613]:    <boot dev="hd"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <smbios mode="sysinfo"/>
Nov 23 04:58:11 localhost nova_compute[281613]:  </os>
Nov 23 04:58:11 localhost nova_compute[281613]:  <features>
Nov 23 04:58:11 localhost nova_compute[281613]:    <acpi/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <apic/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <vmcoreinfo/>
Nov 23 04:58:11 localhost nova_compute[281613]:  </features>
Nov 23 04:58:11 localhost nova_compute[281613]:  <clock offset="utc">
Nov 23 04:58:11 localhost nova_compute[281613]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <timer name="hpet" present="no"/>
Nov 23 04:58:11 localhost nova_compute[281613]:  </clock>
Nov 23 04:58:11 localhost nova_compute[281613]:  <cpu mode="host-model" match="exact">
Nov 23 04:58:11 localhost nova_compute[281613]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 04:58:11 localhost nova_compute[281613]:  </cpu>
Nov 23 04:58:11 localhost nova_compute[281613]:  <devices>
Nov 23 04:58:11 localhost nova_compute[281613]:    <disk type="network" device="disk">
Nov 23 04:58:11 localhost nova_compute[281613]:      <driver type="raw" cache="none"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      <source protocol="rbd" name="vms/76d6f171-13c9-4730-8ed3-ab467ef6831a_disk">
Nov 23 04:58:11 localhost nova_compute[281613]:        <host name="172.18.0.103" port="6789"/>
Nov 23 04:58:11 localhost nova_compute[281613]:        <host name="172.18.0.104" port="6789"/>
Nov 23 04:58:11 localhost nova_compute[281613]:        <host name="172.18.0.105" port="6789"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      </source>
Nov 23 04:58:11 localhost nova_compute[281613]:      <auth username="openstack">
Nov 23 04:58:11 localhost nova_compute[281613]:        <secret type="ceph" uuid="46550e70-79cb-5f55-bf6d-1204b97e083b"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      </auth>
Nov 23 04:58:11 localhost nova_compute[281613]:      <target dev="vda" bus="virtio"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    </disk>
Nov 23 04:58:11 localhost nova_compute[281613]:    <disk type="network" device="cdrom">
Nov 23 04:58:11 localhost nova_compute[281613]:      <driver type="raw" cache="none"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      <source protocol="rbd" name="vms/76d6f171-13c9-4730-8ed3-ab467ef6831a_disk.config">
Nov 23 04:58:11 localhost nova_compute[281613]:        <host name="172.18.0.103" port="6789"/>
Nov 23 04:58:11 localhost nova_compute[281613]:        <host name="172.18.0.104" port="6789"/>
Nov 23 04:58:11 localhost nova_compute[281613]:        <host name="172.18.0.105" port="6789"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      </source>
Nov 23 04:58:11 localhost nova_compute[281613]:      <auth username="openstack">
Nov 23 04:58:11 localhost nova_compute[281613]:        <secret type="ceph" uuid="46550e70-79cb-5f55-bf6d-1204b97e083b"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      </auth>
Nov 23 04:58:11 localhost nova_compute[281613]:      <target dev="sda" bus="sata"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    </disk>
Nov 23 04:58:11 localhost nova_compute[281613]:    <interface type="ethernet">
Nov 23 04:58:11 localhost nova_compute[281613]:      <mac address="fa:16:3e:fe:c3:5c"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      <model type="virtio"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      <mtu size="1442"/>
Nov 23 04:58:11 localhost nova_compute[281613]:      <target dev="tap27d340a7-60"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    </interface>
Nov 23 04:58:11 localhost nova_compute[281613]:    <serial type="pty">
Nov 23 04:58:11 localhost nova_compute[281613]:      <log file="/var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/console.log" append="off"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    </serial>
Nov 23 04:58:11 localhost nova_compute[281613]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <video>
Nov 23 04:58:11 localhost nova_compute[281613]:      <model type="virtio"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    </video>
Nov 23 04:58:11 localhost nova_compute[281613]:    <input type="tablet" bus="usb"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <rng model="virtio">
Nov 23 04:58:11 localhost nova_compute[281613]:      <backend model="random">/dev/urandom</backend>
Nov 23 04:58:11 localhost nova_compute[281613]:    </rng>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <controller type="usb" index="0"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    <memballoon model="virtio">
Nov 23 04:58:11 localhost nova_compute[281613]:      <stats period="10"/>
Nov 23 04:58:11 localhost nova_compute[281613]:    </memballoon>
Nov 23 04:58:11 localhost nova_compute[281613]:  </devices>
Nov 23 04:58:11 localhost nova_compute[281613]: </domain>
Nov 23 04:58:11 localhost nova_compute[281613]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.559 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Preparing to wait for external event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.560 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.560 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.560 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.562 281617 DEBUG nova.virt.libvirt.vif [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T09:58:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-151326874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005532586.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-151326874',id=6,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532586.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a2148c18d8f24a6db12dc22c787e8b2e',ramdisk_id='',reservation_id='r-6eghyq4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1734069518',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1734069518-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T09:58:06Z,user_data=None,user_id='9a28cb0574d148bf982a2a1a0b495020',uuid=76d6f171-13c9-4730-8ed3-ab467ef6831a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.562 281617 DEBUG nova.network.os_vif_util [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Converting VIF {"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.563 281617 DEBUG nova.network.os_vif_util [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.564 281617 DEBUG os_vif [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.616 281617 DEBUG ovsdbapp.backend.ovs_idl [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.617 281617 DEBUG ovsdbapp.backend.ovs_idl [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.617 281617 DEBUG ovsdbapp.backend.ovs_idl [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.618 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.619 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.620 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.621 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.623 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.627 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.650 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.650 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.651 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 04:58:11 localhost nova_compute[281613]: 2025-11-23 09:58:11.652 281617 INFO oslo.privsep.daemon [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp7rvi5cai/privsep.sock']#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.034 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.034 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.035 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.036 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.052 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.052 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.053 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.053 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.054 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:58:12 localhost systemd[1]: tmp-crun.Wk6e2f.mount: Deactivated successfully.
Nov 23 04:58:12 localhost podman[308671]: 2025-11-23 09:58:12.176487085 +0000 UTC m=+0.084874796 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 23 04:58:12 localhost podman[308672]: 2025-11-23 09:58:12.185673376 +0000 UTC m=+0.089635626 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 04:58:12 localhost podman[308672]: 2025-11-23 09:58:12.193767577 +0000 UTC m=+0.097729837 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 04:58:12 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:58:12 localhost podman[308673]: 2025-11-23 09:58:12.24704575 +0000 UTC m=+0.146945589 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:58:12 localhost podman[308671]: 2025-11-23 09:58:12.260077316 +0000 UTC m=+0.168464997 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:58:12 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:58:12 localhost podman[308673]: 2025-11-23 09:58:12.310948113 +0000 UTC m=+0.210847952 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 04:58:12 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.352 281617 INFO oslo.privsep.daemon [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.241 308744 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.249 308744 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.252 308744 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.253 308744 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308744#033[00m
Nov 23 04:58:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:58:12 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2409305137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.515 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.610 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.611 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27d340a7-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.612 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27d340a7-60, col_values=(('external_ids', {'iface-id': '27d340a7-60a4-4a73-9f16-bae5ab3411da', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:c3:5c', 'vm-uuid': '76d6f171-13c9-4730-8ed3-ab467ef6831a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.658 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.661 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.666 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.667 281617 INFO os_vif [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60')#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.724 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.724 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.725 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] No VIF found with MAC fa:16:3e:fe:c3:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.726 281617 INFO nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Using config drive#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.763 281617 DEBUG nova.storage.rbd_utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] rbd image 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.888 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.890 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11925MB free_disk=41.78293991088867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.891 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.892 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.942 281617 INFO nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Creating config drive at /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/disk.config#033[00m
Nov 23 04:58:12 localhost nova_compute[281613]: 2025-11-23 09:58:12.948 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0r6wpqcp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:58:12 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:12.983 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:12Z, description=, device_id=489975b8-b64c-4318-bb24-a798d93046de, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cf7520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cf7f70>], id=f9b93e75-681a-4b9c-8a78-b51e0259bc38, ip_allocation=immediate, mac_address=fa:16:3e:04:77:45, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=561, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:58:12Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.074 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0r6wpqcp" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.120 281617 DEBUG nova.storage.rbd_utils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] rbd image 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.133 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/disk.config 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:58:13 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 6 addresses
Nov 23 04:58:13 localhost systemd[1]: tmp-crun.KfVWAQ.mount: Deactivated successfully.
Nov 23 04:58:13 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:58:13 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:58:13 localhost podman[308815]: 2025-11-23 09:58:13.220154314 +0000 UTC m=+0.073079725 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.234 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Instance 76d6f171-13c9-4730-8ed3-ab467ef6831a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.235 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.235 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.375 281617 DEBUG oslo_concurrency.processutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/disk.config 76d6f171-13c9-4730-8ed3-ab467ef6831a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.376 281617 INFO nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Deleting local config drive /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a/disk.config because it was imported into RBD.#033[00m
Nov 23 04:58:13 localhost systemd[1]: Started libvirt secret daemon.
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.457 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 04:58:13 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:13.480 262721 INFO neutron.agent.dhcp.agent [None req-df3287fe-d318-42d3-9c08-984468682b50 - - - - - -] DHCP configuration for ports {'f9b93e75-681a-4b9c-8a78-b51e0259bc38'} is completed#033[00m
Nov 23 04:58:13 localhost kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 23 04:58:13 localhost kernel: device tap27d340a7-60 entered promiscuous mode
Nov 23 04:58:13 localhost NetworkManager[5990]: <info>  [1763891893.5042] manager: (tap27d340a7-60): new Tun device (/org/freedesktop/NetworkManager/Devices/15)
Nov 23 04:58:13 localhost ovn_controller[153786]: 2025-11-23T09:58:13Z|00041|binding|INFO|Claiming lport 27d340a7-60a4-4a73-9f16-bae5ab3411da for this chassis.
Nov 23 04:58:13 localhost ovn_controller[153786]: 2025-11-23T09:58:13Z|00042|binding|INFO|27d340a7-60a4-4a73-9f16-bae5ab3411da: Claiming fa:16:3e:fe:c3:5c 10.100.0.7
Nov 23 04:58:13 localhost ovn_controller[153786]: 2025-11-23T09:58:13Z|00043|binding|INFO|Claiming lport b779be61-5809-44a6-8395-bfdf8254b4cc for this chassis.
Nov 23 04:58:13 localhost ovn_controller[153786]: 2025-11-23T09:58:13Z|00044|binding|INFO|b779be61-5809-44a6-8395-bfdf8254b4cc: Claiming fa:16:3e:e3:5d:7d 19.80.0.7
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.506 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:13 localhost systemd-udevd[308887]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.520 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:5d:7d 19.80.0.7'], port_security=['fa:16:3e:e3:5d:7d 19.80.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['27d340a7-60a4-4a73-9f16-bae5ab3411da'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-711090127', 'neutron:cidrs': '19.80.0.7/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-711090127', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff44a28d-1e1f-4163-b206-fdf77022bf0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0e3b2035-d1e3-4dc9-824d-c8c5d8c83090, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=b779be61-5809-44a6-8395-bfdf8254b4cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.523 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c3:5c 10.100.0.7'], port_security=['fa:16:3e:fe:c3:5c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2092561411', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '76d6f171-13c9-4730-8ed3-ab467ef6831a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81348c6d-951a-4399-8703-476056b57fe9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2092561411', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff44a28d-1e1f-4163-b206-fdf77022bf0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1897b64f-0c37-45be-8353-f858f64309cd, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=27d340a7-60a4-4a73-9f16-bae5ab3411da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.526 159429 INFO neutron.agent.ovn.metadata.agent [-] Port b779be61-5809-44a6-8395-bfdf8254b4cc in datapath 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 bound to our chassis#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.529 159429 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3#033[00m
Nov 23 04:58:13 localhost NetworkManager[5990]: <info>  [1763891893.5347] device (tap27d340a7-60): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Nov 23 04:58:13 localhost NetworkManager[5990]: <info>  [1763891893.5354] device (tap27d340a7-60): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.550 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.560 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:13 localhost ovn_controller[153786]: 2025-11-23T09:58:13Z|00045|binding|INFO|Setting lport 27d340a7-60a4-4a73-9f16-bae5ab3411da ovn-installed in OVS
Nov 23 04:58:13 localhost ovn_controller[153786]: 2025-11-23T09:58:13Z|00046|binding|INFO|Setting lport 27d340a7-60a4-4a73-9f16-bae5ab3411da up in Southbound
Nov 23 04:58:13 localhost ovn_controller[153786]: 2025-11-23T09:58:13Z|00047|binding|INFO|Setting lport b779be61-5809-44a6-8395-bfdf8254b4cc up in Southbound
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.563 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:13 localhost systemd-machined[203166]: New machine qemu-1-instance-00000006.
Nov 23 04:58:13 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000006.
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.660 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.661 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.685 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.709 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.747 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.940 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[db64250a-0872-4ef0-86ac-3cd61a97ee0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.941 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8cd987c4-71 in ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.943 262865 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8cd987c4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.943 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[a221f731-d346-4383-a2f6-a34ca4e8e087]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.944 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e9fcb02f-8a43-443c-9c9d-fc2dd145b00b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.973 281617 DEBUG nova.virt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Emitting event <LifecycleEvent: 1763891893.9724464, 76d6f171-13c9-4730-8ed3-ab467ef6831a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 04:58:13 localhost nova_compute[281613]: 2025-11-23 09:58:13.974 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] VM Started (Lifecycle Event)#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.980 159535 DEBUG oslo.privsep.daemon [-] privsep: reply[034f9f21-823e-47ef-9996-6a94a2f94b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.994 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[b3816cdb-2609-4ad7-9ca5-c3bbc9da83ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:13 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:13.997 159429 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmprx77qv9i/privsep.sock']#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.000 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.015 281617 DEBUG nova.virt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Emitting event <LifecycleEvent: 1763891893.9726462, 76d6f171-13c9-4730-8ed3-ab467ef6831a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.015 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] VM Paused (Lifecycle Event)#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.040 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.045 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.069 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 04:58:14 localhost podman[308967]: 2025-11-23 09:58:14.103263473 +0000 UTC m=+0.102602630 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 23 04:58:14 localhost podman[308967]: 2025-11-23 09:58:14.119997249 +0000 UTC m=+0.119336446 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:58:14 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:58:14 localhost systemd[1]: tmp-crun.LeeYS5.mount: Deactivated successfully.
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.180 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:58:14 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/227347488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.204 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.211 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.273 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updated inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.275 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.275 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.309 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.309 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:14 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:14.673 159429 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 04:58:14 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:14.674 159429 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprx77qv9i/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 23 04:58:14 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:14.571 308993 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 04:58:14 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:14.577 308993 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 04:58:14 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:14.581 308993 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 23 04:58:14 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:14.581 308993 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308993#033[00m
Nov 23 04:58:14 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:14.676 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[abc22a8e-38fa-431c-9de5-17a5243c0d24]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:14 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:14.755 262721 INFO neutron.agent.linux.ip_lib [None req-de23e539-1679-4cf9-9215-c019239babbf - - - - - -] Device tapd3fbb916-4d cannot be used as it has no MAC address#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.778 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:14 localhost kernel: device tapd3fbb916-4d entered promiscuous mode
Nov 23 04:58:14 localhost NetworkManager[5990]: <info>  [1763891894.7868] manager: (tapd3fbb916-4d): new Generic device (/org/freedesktop/NetworkManager/Devices/16)
Nov 23 04:58:14 localhost ovn_controller[153786]: 2025-11-23T09:58:14Z|00048|binding|INFO|Claiming lport d3fbb916-4dec-4a6b-8718-6271b2e70b14 for this chassis.
Nov 23 04:58:14 localhost ovn_controller[153786]: 2025-11-23T09:58:14Z|00049|binding|INFO|d3fbb916-4dec-4a6b-8718-6271b2e70b14: Claiming unknown
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.790 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:14 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:14.800 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57d9e088e75b4a3482d0e3a02bcce5be', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13c32b33-300d-4a8e-8e29-21d478bcccf9, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=d3fbb916-4dec-4a6b-8718-6271b2e70b14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:58:14 localhost ovn_controller[153786]: 2025-11-23T09:58:14Z|00050|binding|INFO|Setting lport d3fbb916-4dec-4a6b-8718-6271b2e70b14 ovn-installed in OVS
Nov 23 04:58:14 localhost ovn_controller[153786]: 2025-11-23T09:58:14Z|00051|binding|INFO|Setting lport d3fbb916-4dec-4a6b-8718-6271b2e70b14 up in Southbound
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.869 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:14 localhost nova_compute[281613]: 2025-11-23 09:58:14.925 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.219 308993 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.219 308993 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.219 308993 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:15 localhost nova_compute[281613]: 2025-11-23 09:58:15.224 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:15 localhost nova_compute[281613]: 2025-11-23 09:58:15.292 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:15 localhost nova_compute[281613]: 2025-11-23 09:58:15.293 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:15 localhost nova_compute[281613]: 2025-11-23 09:58:15.294 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:15 localhost ovn_controller[153786]: 2025-11-23T09:58:15Z|00052|memory|INFO|peak resident set size grew 62% in last 2261.3 seconds, from 13008 kB to 21112 kB
Nov 23 04:58:15 localhost ovn_controller[153786]: 2025-11-23T09:58:15Z|00053|memory|INFO|idl-cells-OVN_Southbound:10065 idl-cells-Open_vSwitch:1155 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:321 lflow-cache-entries-cache-matches:273 lflow-cache-size-KB:1285 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:618 ofctrl_installed_flow_usage-KB:452 ofctrl_sb_flow_ref_usage-KB:230
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.755 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e4d171-e1c1-4ea0-bf70-34a685f399c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost NetworkManager[5990]: <info>  [1763891895.7812] manager: (tap8cd987c4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/17)
Nov 23 04:58:15 localhost systemd-udevd[308886]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.785 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[294724fc-66ef-4582-a16e-0c0101d3b1c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.820 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec1f394-6ba4-4035-8227-199de88fcf2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.826 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fdc631-b6a5-47df-b6ae-afa9631b66d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap8cd987c4-71: link becomes ready
Nov 23 04:58:15 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap8cd987c4-70: link becomes ready
Nov 23 04:58:15 localhost NetworkManager[5990]: <info>  [1763891895.8557] device (tap8cd987c4-70): carrier: link connected
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.862 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[81be424b-e8a3-4119-acc6-b992e5673988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost podman[309066]: 
Nov 23 04:58:15 localhost podman[309066]: 2025-11-23 09:58:15.88377636 +0000 UTC m=+0.097812809 container create 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.888 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebf9685-ad30-4877-8f15-a490e356aea9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8cd987c4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:fe:e5:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1183901, 'reachable_time': 17844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 
0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309096, 'error': None, 'target': 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.908 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[675fd103-8d61-44fd-86e1-4df6f6e88050]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:e5b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1183901, 'tstamp': 1183901}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309097, 'error': None, 'target': 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.929 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[23e908ff-3a4b-4104-a6a0-d226b32747e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8cd987c4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:fe:e5:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1183901, 'reachable_time': 17844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 
0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309098, 'error': None, 'target': 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost podman[309066]: 2025-11-23 09:58:15.837248591 +0000 UTC m=+0.051285060 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:58:15 localhost systemd[1]: Started libpod-conmon-30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e.scope.
Nov 23 04:58:15 localhost systemd[1]: tmp-crun.p1xPwj.mount: Deactivated successfully.
Nov 23 04:58:15 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:15.963 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e300055b-4972-425e-a2a1-e2825b260357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:15 localhost systemd[1]: Started libcrun container.
Nov 23 04:58:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fbe7e9ea349ed9312c6cf0466b27178d4662ccb430336d77979dd8507ad40e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:58:15 localhost podman[309066]: 2025-11-23 09:58:15.999571809 +0000 UTC m=+0.213608258 container init 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:58:16 localhost podman[309066]: 2025-11-23 09:58:16.010043364 +0000 UTC m=+0.224079813 container start 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 04:58:16 localhost dnsmasq[309109]: started, version 2.85 cachesize 150
Nov 23 04:58:16 localhost dnsmasq[309109]: DNS service limited to local subnets
Nov 23 04:58:16 localhost dnsmasq[309109]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 04:58:16 localhost dnsmasq[309109]: warning: no upstream servers configured
Nov 23 04:58:16 localhost dnsmasq-dhcp[309109]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 04:58:16 localhost dnsmasq[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/addn_hosts - 0 addresses
Nov 23 04:58:16 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/host
Nov 23 04:58:16 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/opts
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.037 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9450dc-be66-416b-b3f8-c6f3a4659586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.039 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cd987c4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.039 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.040 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cd987c4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.077 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:16 localhost kernel: device tap8cd987c4-70 entered promiscuous mode
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.080 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.081 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8cd987c4-70, col_values=(('external_ids', {'iface-id': '6df03061-a46e-4f2d-b42f-4f149f759e31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.082 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:16 localhost ovn_controller[153786]: 2025-11-23T09:58:16Z|00054|binding|INFO|Releasing lport 6df03061-a46e-4f2d-b42f-4f149f759e31 from this chassis (sb_readonly=0)
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.092 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.093 159429 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8cd987c4-7e4e-467f-9ee2-d70cb75b87c3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8cd987c4-7e4e-467f-9ee2-d70cb75b87c3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.094 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[c4810918-0b89-4635-a440-1b670fe2f453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.096 159429 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: global
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    log         /dev/log local0 debug
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    log-tag     haproxy-metadata-proxy-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    user        root
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    group       root
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    maxconn     1024
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    pidfile     /var/lib/neutron/external/pids/8cd987c4-7e4e-467f-9ee2-d70cb75b87c3.pid.haproxy
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    daemon
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: defaults
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    log global
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    mode http
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    option httplog
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    option dontlognull
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    option http-server-close
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    option forwardfor
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    retries                 3
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    timeout http-request    30s
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    timeout connect         30s
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    timeout client          32s
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    timeout server          32s
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    timeout http-keep-alive 30s
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: listen listener
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    bind 169.254.169.254:80
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]:    http-request add-header X-OVN-Network-ID 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.097 159429 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'env', 'PROCESS_TAG=haproxy-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8cd987c4-7e4e-467f-9ee2-d70cb75b87c3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 04:58:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:16.150 262721 INFO neutron.agent.dhcp.agent [None req-c6e47d42-fd72-4e10-a326-95853ef839dc - - - - - -] DHCP configuration for ports {'715327f3-58f8-4e31-a024-c2bfb15a8f1c'} is completed#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.427 281617 DEBUG nova.compute.manager [req-b3f45d2d-6f7b-47f0-ab1d-5d3145f975ec req-0246649b-6043-4471-a4f1-8c5283e5aaa3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.428 281617 DEBUG oslo_concurrency.lockutils [req-b3f45d2d-6f7b-47f0-ab1d-5d3145f975ec req-0246649b-6043-4471-a4f1-8c5283e5aaa3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.428 281617 DEBUG oslo_concurrency.lockutils [req-b3f45d2d-6f7b-47f0-ab1d-5d3145f975ec req-0246649b-6043-4471-a4f1-8c5283e5aaa3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.429 281617 DEBUG oslo_concurrency.lockutils [req-b3f45d2d-6f7b-47f0-ab1d-5d3145f975ec req-0246649b-6043-4471-a4f1-8c5283e5aaa3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.429 281617 DEBUG nova.compute.manager [req-b3f45d2d-6f7b-47f0-ab1d-5d3145f975ec req-0246649b-6043-4471-a4f1-8c5283e5aaa3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Processing event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.430 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.435 281617 DEBUG nova.virt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Emitting event <LifecycleEvent: 1763891896.435477, 76d6f171-13c9-4730-8ed3-ab467ef6831a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.436 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] VM Resumed (Lifecycle Event)#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.438 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.454 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.455 281617 INFO nova.virt.libvirt.driver [-] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Instance spawned successfully.#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.456 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.461 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.480 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.486 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.487 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.487 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.488 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.489 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.490 281617 DEBUG nova.virt.libvirt.driver [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.563 281617 INFO nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Took 10.15 seconds to spawn the instance on the hypervisor.#033[00m
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.564 281617 DEBUG nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 04:58:16 localhost podman[309137]: 
Nov 23 04:58:16 localhost podman[309137]: 2025-11-23 09:58:16.592768299 +0000 UTC m=+0.096661257 container create a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.632 281617 INFO nova.compute.manager [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Took 11.22 seconds to build instance.#033[00m
Nov 23 04:58:16 localhost systemd[1]: Started libpod-conmon-a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864.scope.
Nov 23 04:58:16 localhost nova_compute[281613]: 2025-11-23 09:58:16.647 281617 DEBUG oslo_concurrency.lockutils [None req-ec6b23b5-bb7c-4201-bd7b-af3aa4f439c3 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:16 localhost systemd[1]: Started libcrun container.
Nov 23 04:58:16 localhost podman[309137]: 2025-11-23 09:58:16.548896053 +0000 UTC m=+0.052789051 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 23 04:58:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea9cc48a9ea1417d14d030cc5b6e9da9ce95a718d7ae6eb6738ffd9bf2b265c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:58:16 localhost podman[309137]: 2025-11-23 09:58:16.663445848 +0000 UTC m=+0.167338796 container init a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:58:16 localhost podman[309137]: 2025-11-23 09:58:16.676446032 +0000 UTC m=+0.180338980 container start a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:58:16 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[309151]: [NOTICE]   (309155) : New worker (309157) forked
Nov 23 04:58:16 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[309151]: [NOTICE]   (309155) : Loading success.
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.735 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 27d340a7-60a4-4a73-9f16-bae5ab3411da in datapath 81348c6d-951a-4399-8703-476056b57fe9 unbound from our chassis#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.739 159429 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81348c6d-951a-4399-8703-476056b57fe9#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.751 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb53aee-95f9-45cf-acbd-b6f30f9d6d2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.752 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81348c6d-91 in ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.754 262865 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81348c6d-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.754 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[05c52514-9c29-44f2-b01c-7fb5308882fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.756 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[24096d49-70f4-4a0d-8e8d-452b82cd11b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.776 159535 DEBUG oslo.privsep.daemon [-] privsep: reply[efc37dd8-1340-41d8-a980-f10695944c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.792 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[a97cfae6-1048-4a81-b2f5-4c420aa8d4b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.829 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[21bacd1c-ee0e-4e5f-a6b5-878dbb325c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost NetworkManager[5990]: <info>  [1763891896.8394] manager: (tap81348c6d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/18)
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.838 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[c9196c4f-2508-47a3-a3a2-7b0c996bf884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost systemd-udevd[309075]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.879 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3a28f9-3b3c-4654-964e-d6cd82979a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.883 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[94595091-4350-4085-995f-4e212b30d249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap81348c6d-91: link becomes ready
Nov 23 04:58:16 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap81348c6d-90: link becomes ready
Nov 23 04:58:16 localhost NetworkManager[5990]: <info>  [1763891896.9107] device (tap81348c6d-90): carrier: link connected
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.918 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[dc19741e-bf03-4aae-bad6-6d12a385c91b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.948 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f60ab574-73e3-4a08-920c-c2ec5647eb21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81348c6d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:6f:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1184007, 'reachable_time': 41397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 
0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309177, 'error': None, 'target': 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:16 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:16.972 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f10d374c-3a93-43fb-bdc6-5979cd533c27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:4c93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1184007, 'tstamp': 1184007}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309178, 'error': None, 'target': 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.000 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[da35d81b-5c75-43c4-84b5-4edec45892b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81348c6d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:6f:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1184007, 'reachable_time': 41397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 
0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309179, 'error': None, 'target': 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:17 localhost nova_compute[281613]: 2025-11-23 09:58:17.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:17 localhost nova_compute[281613]: 2025-11-23 09:58:17.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.046 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[aa971373-ecb7-457b-aa60-eaac2d198340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.138 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[29b1e2fb-f797-498c-9d6e-09a86a6c7632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.139 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81348c6d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.140 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.141 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81348c6d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:17 localhost nova_compute[281613]: 2025-11-23 09:58:17.175 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:17 localhost kernel: device tap81348c6d-90 entered promiscuous mode
Nov 23 04:58:17 localhost nova_compute[281613]: 2025-11-23 09:58:17.182 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.185 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81348c6d-90, col_values=(('external_ids', {'iface-id': 'bb526e17-a505-43fd-a1af-511960f787ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:17 localhost nova_compute[281613]: 2025-11-23 09:58:17.187 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:17 localhost ovn_controller[153786]: 2025-11-23T09:58:17Z|00055|binding|INFO|Releasing lport bb526e17-a505-43fd-a1af-511960f787ee from this chassis (sb_readonly=0)
Nov 23 04:58:17 localhost nova_compute[281613]: 2025-11-23 09:58:17.197 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:17 localhost nova_compute[281613]: 2025-11-23 09:58:17.198 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.199 159429 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81348c6d-951a-4399-8703-476056b57fe9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81348c6d-951a-4399-8703-476056b57fe9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.546 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5ff4ad-715a-4349-9ff7-c33b2b8f4b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.548 159429 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: global
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    log         /dev/log local0 debug
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    log-tag     haproxy-metadata-proxy-81348c6d-951a-4399-8703-476056b57fe9
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    user        root
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    group       root
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    maxconn     1024
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    pidfile     /var/lib/neutron/external/pids/81348c6d-951a-4399-8703-476056b57fe9.pid.haproxy
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    daemon
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: defaults
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    log global
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    mode http
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    option httplog
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    option dontlognull
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    option http-server-close
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    option forwardfor
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    retries                 3
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    timeout http-request    30s
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    timeout connect         30s
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    timeout client          32s
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    timeout server          32s
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    timeout http-keep-alive 30s
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: listen listener
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    bind 169.254.169.254:80
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]:    http-request add-header X-OVN-Network-ID 81348c6d-951a-4399-8703-476056b57fe9
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 04:58:17 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:17.549 159429 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'env', 'PROCESS_TAG=haproxy-81348c6d-951a-4399-8703-476056b57fe9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81348c6d-951a-4399-8703-476056b57fe9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 04:58:17 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:17.552 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:16Z, description=, device_id=1c2cd0fd-e370-4286-b134-de0f86a91701, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c7da60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c7db50>], id=d2cb98cb-3856-4c82-9a64-f2a9d61a268f, ip_allocation=immediate, mac_address=fa:16:3e:75:81:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=600, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:58:17Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:58:17 localhost nova_compute[281613]: 2025-11-23 09:58:17.658 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:17 localhost podman[309208]: 2025-11-23 09:58:17.86709922 +0000 UTC m=+0.089421681 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 23 04:58:17 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 7 addresses
Nov 23 04:58:17 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:58:17 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:58:18 localhost podman[309249]: 
Nov 23 04:58:18 localhost podman[309249]: 2025-11-23 09:58:18.147690314 +0000 UTC m=+0.126118111 container create b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:58:18 localhost podman[309249]: 2025-11-23 09:58:18.078285731 +0000 UTC m=+0.056713578 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 23 04:58:18 localhost systemd[1]: Started libpod-conmon-b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451.scope.
Nov 23 04:58:18 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:18.224 262721 INFO neutron.agent.dhcp.agent [None req-3f3ff820-23ea-4b6a-b3dc-6b980ac1e8e3 - - - - - -] DHCP configuration for ports {'d2cb98cb-3856-4c82-9a64-f2a9d61a268f'} is completed#033[00m
Nov 23 04:58:18 localhost systemd[1]: Started libcrun container.
Nov 23 04:58:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c7ac5fb65411fe5912c692b4f25f6ee34f60e7593e544133720487c7159556/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:58:18 localhost podman[309249]: 2025-11-23 09:58:18.249068139 +0000 UTC m=+0.227495946 container init b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:58:18 localhost podman[309249]: 2025-11-23 09:58:18.261033565 +0000 UTC m=+0.239461372 container start b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 23 04:58:18 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[309267]: [NOTICE]   (309271) : New worker (309273) forked
Nov 23 04:58:18 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[309267]: [NOTICE]   (309271) : Loading success.
Nov 23 04:58:18 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:18.348 159429 INFO neutron.agent.ovn.metadata.agent [-] Port d3fbb916-4dec-4a6b-8718-6271b2e70b14 in datapath e40b78ba-3bb8-4706-86c2-b7af5d0d6c67 unbound from our chassis#033[00m
Nov 23 04:58:18 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:18.351 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port ad92ac7f-a209-4ec0-818d-881d4a156279 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 04:58:18 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:18.352 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:58:18 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:18.354 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[10d7a2b4-48a8-4e05-a766-2645b11a860b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:18 localhost podman[309299]: 2025-11-23 09:58:18.552133486 +0000 UTC m=+0.079339375 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:58:18 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 6 addresses
Nov 23 04:58:18 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:58:18 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:58:18 localhost ovn_controller[153786]: 2025-11-23T09:58:18Z|00056|binding|INFO|Releasing lport 6df03061-a46e-4f2d-b42f-4f149f759e31 from this chassis (sb_readonly=0)
Nov 23 04:58:18 localhost ovn_controller[153786]: 2025-11-23T09:58:18Z|00057|binding|INFO|Releasing lport bb526e17-a505-43fd-a1af-511960f787ee from this chassis (sb_readonly=0)
Nov 23 04:58:18 localhost nova_compute[281613]: 2025-11-23 09:58:18.688 281617 DEBUG nova.compute.manager [req-ffc910c1-c123-46a4-86c9-96d371beff56 req-473c1549-e64e-4024-b18f-095cec10d3c3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 04:58:18 localhost nova_compute[281613]: 2025-11-23 09:58:18.689 281617 DEBUG oslo_concurrency.lockutils [req-ffc910c1-c123-46a4-86c9-96d371beff56 req-473c1549-e64e-4024-b18f-095cec10d3c3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:18 localhost nova_compute[281613]: 2025-11-23 09:58:18.689 281617 DEBUG oslo_concurrency.lockutils [req-ffc910c1-c123-46a4-86c9-96d371beff56 req-473c1549-e64e-4024-b18f-095cec10d3c3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:18 localhost nova_compute[281613]: 2025-11-23 09:58:18.690 281617 DEBUG oslo_concurrency.lockutils [req-ffc910c1-c123-46a4-86c9-96d371beff56 req-473c1549-e64e-4024-b18f-095cec10d3c3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:18 localhost nova_compute[281613]: 2025-11-23 09:58:18.691 281617 DEBUG nova.compute.manager [req-ffc910c1-c123-46a4-86c9-96d371beff56 req-473c1549-e64e-4024-b18f-095cec10d3c3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 04:58:18 localhost nova_compute[281613]: 2025-11-23 09:58:18.691 281617 WARNING nova.compute.manager [req-ffc910c1-c123-46a4-86c9-96d371beff56 req-473c1549-e64e-4024-b18f-095cec10d3c3 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received unexpected event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with vm_state active and task_state None.#033[00m
Nov 23 04:58:18 localhost nova_compute[281613]: 2025-11-23 09:58:18.714 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:19 localhost systemd[1]: tmp-crun.5fohji.mount: Deactivated successfully.
Nov 23 04:58:19 localhost nova_compute[281613]: 2025-11-23 09:58:19.187 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:20 localhost nova_compute[281613]: 2025-11-23 09:58:20.233 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:20 localhost nova_compute[281613]: 2025-11-23 09:58:20.448 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Check if temp file /var/lib/nova/instances/tmpbkkhnjeq exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Nov 23 04:58:20 localhost nova_compute[281613]: 2025-11-23 09:58:20.449 281617 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=13312,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbkkhnjeq',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76d6f171-13c9-4730-8ed3-ab467ef6831a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Nov 23 04:58:20 localhost nova_compute[281613]: 2025-11-23 09:58:20.451 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:20 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:20.649 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:20Z, description=, device_id=1c2cd0fd-e370-4286-b134-de0f86a91701, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c31c70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c31c10>], id=50a9187b-d3d9-4d1b-b399-fdfa0639bd76, ip_allocation=immediate, mac_address=fa:16:3e:53:dd:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:58:12Z, description=, dns_domain=, id=e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1589500725-network, port_security_enabled=True, project_id=57d9e088e75b4a3482d0e3a02bcce5be, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57802, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=562, status=ACTIVE, subnets=['dfde38ba-1118-438b-abb5-04f96d67c3f1'], tags=[], tenant_id=57d9e088e75b4a3482d0e3a02bcce5be, updated_at=2025-11-23T09:58:13Z, vlan_transparent=None, network_id=e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, port_security_enabled=False, project_id=57d9e088e75b4a3482d0e3a02bcce5be, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=612, status=DOWN, tags=[], tenant_id=57d9e088e75b4a3482d0e3a02bcce5be, updated_at=2025-11-23T09:58:20Z on network e40b78ba-3bb8-4706-86c2-b7af5d0d6c67#033[00m
Nov 23 04:58:20 localhost podman[309338]: 2025-11-23 09:58:20.868225163 +0000 UTC m=+0.059773872 container kill 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 23 04:58:20 localhost dnsmasq[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/addn_hosts - 1 addresses
Nov 23 04:58:20 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/host
Nov 23 04:58:20 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/opts
Nov 23 04:58:21 localhost nova_compute[281613]: 2025-11-23 09:58:21.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:58:21 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:21.181 262721 INFO neutron.agent.dhcp.agent [None req-6fc41e79-be3c-45ab-be19-b7aab5ab9258 - - - - - -] DHCP configuration for ports {'50a9187b-d3d9-4d1b-b399-fdfa0639bd76'} is completed#033[00m
Nov 23 04:58:21 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:21.553 2 INFO neutron.agent.securitygroups_rpc [None req-513d9ac5-08dd-4555-997f-809230181da7 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Security group rule updated ['2da1104f-77c5-475e-b21f-e52710edc8b5']#033[00m
Nov 23 04:58:21 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:21.704 2 INFO neutron.agent.securitygroups_rpc [None req-7741ab62-6798-4d09-b205-555af43d015d 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Security group rule updated ['2da1104f-77c5-475e-b21f-e52710edc8b5']#033[00m
Nov 23 04:58:21 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:21.722 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:58:20Z, description=, device_id=1c2cd0fd-e370-4286-b134-de0f86a91701, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c24fd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c24f10>], id=50a9187b-d3d9-4d1b-b399-fdfa0639bd76, ip_allocation=immediate, mac_address=fa:16:3e:53:dd:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:58:12Z, description=, dns_domain=, id=e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1589500725-network, port_security_enabled=True, project_id=57d9e088e75b4a3482d0e3a02bcce5be, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57802, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=562, status=ACTIVE, subnets=['dfde38ba-1118-438b-abb5-04f96d67c3f1'], tags=[], tenant_id=57d9e088e75b4a3482d0e3a02bcce5be, updated_at=2025-11-23T09:58:13Z, vlan_transparent=None, network_id=e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, port_security_enabled=False, project_id=57d9e088e75b4a3482d0e3a02bcce5be, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=612, status=DOWN, tags=[], tenant_id=57d9e088e75b4a3482d0e3a02bcce5be, updated_at=2025-11-23T09:58:20Z on network e40b78ba-3bb8-4706-86c2-b7af5d0d6c67#033[00m
Nov 23 04:58:21 localhost systemd[1]: tmp-crun.MG4tvv.mount: Deactivated successfully.
Nov 23 04:58:22 localhost dnsmasq[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/addn_hosts - 1 addresses
Nov 23 04:58:22 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/host
Nov 23 04:58:22 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/opts
Nov 23 04:58:22 localhost podman[309374]: 2025-11-23 09:58:22.00204507 +0000 UTC m=+0.072759276 container kill 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:58:22 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:58:22.264 262721 INFO neutron.agent.dhcp.agent [None req-03da6c07-db97-4c48-bfbd-f3d44cea3209 - - - - - -] DHCP configuration for ports {'50a9187b-d3d9-4d1b-b399-fdfa0639bd76'} is completed#033[00m
Nov 23 04:58:22 localhost openstack_network_exporter[242118]: ERROR   09:58:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:58:22 localhost openstack_network_exporter[242118]: ERROR   09:58:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:58:22 localhost openstack_network_exporter[242118]: ERROR   09:58:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:58:22 localhost openstack_network_exporter[242118]: ERROR   09:58:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:58:22 localhost openstack_network_exporter[242118]: ERROR   09:58:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:58:22 localhost nova_compute[281613]: 2025-11-23 09:58:22.570 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 04:58:22 localhost nova_compute[281613]: 2025-11-23 09:58:22.571 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 04:58:22 localhost nova_compute[281613]: 2025-11-23 09:58:22.578 281617 INFO nova.compute.rpcapi [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Nov 23 04:58:22 localhost nova_compute[281613]: 2025-11-23 09:58:22.578 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:58:22 localhost nova_compute[281613]: 2025-11-23 09:58:22.660 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:25 localhost nova_compute[281613]: 2025-11-23 09:58:25.232 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:25 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:25.323 2 INFO neutron.agent.securitygroups_rpc [None req-ec9f2257-2897-484b-a0ca-c8a73a80ef4d 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Security group member updated ['a3350144-9b09-432b-a32e-ef84bb8bf494']#033[00m
Nov 23 04:58:26 localhost nova_compute[281613]: 2025-11-23 09:58:26.556 281617 DEBUG nova.compute.manager [req-5e89b611-3fcd-4115-aada-041f774e2e5b req-90d1deb2-0f4b-4636-9606-532013dc7266 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 04:58:26 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:26.557 2 INFO neutron.agent.securitygroups_rpc [req-dc62ce45-8668-47e6-9d5e-2f0b1764537e req-34d4dcd5-73f6-46e0-ba5e-aabbd18e768e 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Security group member updated ['2da1104f-77c5-475e-b21f-e52710edc8b5']#033[00m
Nov 23 04:58:26 localhost nova_compute[281613]: 2025-11-23 09:58:26.558 281617 DEBUG oslo_concurrency.lockutils [req-5e89b611-3fcd-4115-aada-041f774e2e5b req-90d1deb2-0f4b-4636-9606-532013dc7266 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:26 localhost nova_compute[281613]: 2025-11-23 09:58:26.558 281617 DEBUG oslo_concurrency.lockutils [req-5e89b611-3fcd-4115-aada-041f774e2e5b req-90d1deb2-0f4b-4636-9606-532013dc7266 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:26 localhost nova_compute[281613]: 2025-11-23 09:58:26.558 281617 DEBUG oslo_concurrency.lockutils [req-5e89b611-3fcd-4115-aada-041f774e2e5b req-90d1deb2-0f4b-4636-9606-532013dc7266 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:26 localhost nova_compute[281613]: 2025-11-23 09:58:26.559 281617 DEBUG nova.compute.manager [req-5e89b611-3fcd-4115-aada-041f774e2e5b req-90d1deb2-0f4b-4636-9606-532013dc7266 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 04:58:26 localhost nova_compute[281613]: 2025-11-23 09:58:26.559 281617 DEBUG nova.compute.manager [req-5e89b611-3fcd-4115-aada-041f774e2e5b req-90d1deb2-0f4b-4636-9606-532013dc7266 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.662 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.798 281617 INFO nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Took 5.23 seconds for pre_live_migration on destination host np0005532584.localdomain.#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.799 281617 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.813 281617 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbkkhnjeq',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76d6f171-13c9-4730-8ed3-ab467ef6831a',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(ff77ba7c-b1ae-4e8d-9f48-992b8cf9317b),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.818 281617 DEBUG nova.objects.instance [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lazy-loading 'migration_context' on Instance uuid 76d6f171-13c9-4730-8ed3-ab467ef6831a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.819 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.821 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.822 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.834 281617 DEBUG nova.virt.libvirt.vif [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T09:58:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-151326874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005532586.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-151326874',id=6,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-23T09:58:16Z,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532586.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a2148c18d8f24a6db12dc22c787e8b2e',ramdisk_id='',reservation_id='r-6eghyq4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1734069518',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1734069518-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T09:58:16Z,user_data=None,user_id='9a28cb0574d148bf982a2a1a0b495020',uuid=76d6f171-13c9-4730-8ed3-ab467ef6831a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.835 281617 DEBUG nova.network.os_vif_util [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Converting VIF {"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.836 281617 DEBUG nova.network.os_vif_util [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.837 281617 DEBUG nova.virt.libvirt.migration [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updating guest XML with vif config: <interface type="ethernet">
Nov 23 04:58:27 localhost nova_compute[281613]:  <mac address="fa:16:3e:fe:c3:5c"/>
Nov 23 04:58:27 localhost nova_compute[281613]:  <model type="virtio"/>
Nov 23 04:58:27 localhost nova_compute[281613]:  <driver name="vhost" rx_queue_size="512"/>
Nov 23 04:58:27 localhost nova_compute[281613]:  <mtu size="1442"/>
Nov 23 04:58:27 localhost nova_compute[281613]:  <target dev="tap27d340a7-60"/>
Nov 23 04:58:27 localhost nova_compute[281613]: </interface>
Nov 23 04:58:27 localhost nova_compute[281613]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Nov 23 04:58:27 localhost nova_compute[281613]: 2025-11-23 09:58:27.838 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.325 281617 DEBUG nova.virt.libvirt.migration [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.326 281617 INFO nova.virt.libvirt.migration [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.420 281617 INFO nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Nov 23 04:58:28 localhost sshd[309394]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.610 281617 DEBUG nova.compute.manager [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.611 281617 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.612 281617 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.612 281617 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.613 281617 DEBUG nova.compute.manager [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.613 281617 WARNING nova.compute.manager [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received unexpected event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with vm_state active and task_state migrating.#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.614 281617 DEBUG nova.compute.manager [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-changed-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.614 281617 DEBUG nova.compute.manager [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Refreshing instance network info cache due to event network-changed-27d340a7-60a4-4a73-9f16-bae5ab3411da. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.614 281617 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.615 281617 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquired lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.615 281617 DEBUG nova.network.neutron [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Refreshing network info cache for port 27d340a7-60a4-4a73-9f16-bae5ab3411da _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.924 281617 DEBUG nova.virt.libvirt.migration [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 23 04:58:28 localhost nova_compute[281613]: 2025-11-23 09:58:28.925 281617 DEBUG nova.virt.libvirt.migration [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.283 281617 DEBUG nova.network.neutron [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updated VIF entry in instance network info cache for port 27d340a7-60a4-4a73-9f16-bae5ab3411da. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.285 281617 DEBUG nova.network.neutron [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Updating instance_info_cache with network_info: [{"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005532584.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.311 281617 DEBUG oslo_concurrency.lockutils [req-f3075584-ffc7-4be5-afa5-6a4b6379c556 req-6011929f-7dfc-4449-9582-b5bfdfc50301 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Releasing lock "refresh_cache-76d6f171-13c9-4730-8ed3-ab467ef6831a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.431 281617 DEBUG nova.virt.libvirt.migration [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.431 281617 DEBUG nova.virt.libvirt.migration [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.641 281617 DEBUG nova.virt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Emitting event <LifecycleEvent: 1763891909.6407402, 76d6f171-13c9-4730-8ed3-ab467ef6831a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.641 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] VM Paused (Lifecycle Event)#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.664 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.669 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.692 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 23 04:58:29 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:29.789 2 INFO neutron.agent.securitygroups_rpc [None req-2d14bee2-d335-42d8-9b8c-ccacbe55654b 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Security group member updated ['a3350144-9b09-432b-a32e-ef84bb8bf494']#033[00m
Nov 23 04:58:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:29 localhost kernel: device tap27d340a7-60 left promiscuous mode
Nov 23 04:58:29 localhost NetworkManager[5990]: <info>  [1763891909.8332] device (tap27d340a7-60): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Nov 23 04:58:29 localhost ovn_controller[153786]: 2025-11-23T09:58:29Z|00058|binding|INFO|Releasing lport 27d340a7-60a4-4a73-9f16-bae5ab3411da from this chassis (sb_readonly=0)
Nov 23 04:58:29 localhost ovn_controller[153786]: 2025-11-23T09:58:29Z|00059|binding|INFO|Setting lport 27d340a7-60a4-4a73-9f16-bae5ab3411da down in Southbound
Nov 23 04:58:29 localhost ovn_controller[153786]: 2025-11-23T09:58:29Z|00060|binding|INFO|Releasing lport b779be61-5809-44a6-8395-bfdf8254b4cc from this chassis (sb_readonly=0)
Nov 23 04:58:29 localhost ovn_controller[153786]: 2025-11-23T09:58:29Z|00061|binding|INFO|Setting lport b779be61-5809-44a6-8395-bfdf8254b4cc down in Southbound
Nov 23 04:58:29 localhost ovn_controller[153786]: 2025-11-23T09:58:29Z|00062|binding|INFO|Removing iface tap27d340a7-60 ovn-installed in OVS
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.851 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:29 localhost ovn_controller[153786]: 2025-11-23T09:58:29Z|00063|binding|INFO|Releasing lport 6df03061-a46e-4f2d-b42f-4f149f759e31 from this chassis (sb_readonly=0)
Nov 23 04:58:29 localhost ovn_controller[153786]: 2025-11-23T09:58:29Z|00064|binding|INFO|Releasing lport bb526e17-a505-43fd-a1af-511960f787ee from this chassis (sb_readonly=0)
Nov 23 04:58:29 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:29.860 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:5d:7d 19.80.0.7'], port_security=['fa:16:3e:e3:5d:7d 19.80.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['27d340a7-60a4-4a73-9f16-bae5ab3411da'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-711090127', 'neutron:cidrs': '19.80.0.7/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-711090127', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'ff44a28d-1e1f-4163-b206-fdf77022bf0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0e3b2035-d1e3-4dc9-824d-c8c5d8c83090, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=b779be61-5809-44a6-8395-bfdf8254b4cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:58:29 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:29.864 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c3:5c 10.100.0.7'], port_security=['fa:16:3e:fe:c3:5c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain,np0005532584.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'ade391ff-62a6-48e9-b6e8-1a8b190070d2'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2092561411', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '76d6f171-13c9-4730-8ed3-ab467ef6831a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81348c6d-951a-4399-8703-476056b57fe9', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2092561411', 'neutron:project_id': 'a2148c18d8f24a6db12dc22c787e8b2e', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ff44a28d-1e1f-4163-b206-fdf77022bf0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1897b64f-0c37-45be-8353-f858f64309cd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=27d340a7-60a4-4a73-9f16-bae5ab3411da) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:58:29 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:29.866 159429 INFO neutron.agent.ovn.metadata.agent [-] Port b779be61-5809-44a6-8395-bfdf8254b4cc in datapath 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 unbound from our chassis#033[00m
Nov 23 04:58:29 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:29.870 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:58:29 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:29.872 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e01c3f23-7483-4eff-8ea7-df0a99b57e2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:29 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:29.872 159429 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 namespace which is not needed anymore#033[00m
Nov 23 04:58:29 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 23 04:58:29 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 13.392s CPU time.
Nov 23 04:58:29 localhost systemd-machined[203166]: Machine qemu-1-instance-00000006 terminated.
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.891 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.905 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:29 localhost journal[229448]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/76d6f171-13c9-4730-8ed3-ab467ef6831a_disk: No such file or directory
Nov 23 04:58:29 localhost journal[229448]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/76d6f171-13c9-4730-8ed3-ab467ef6831a_disk: No such file or directory
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.983 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.991 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:29 localhost nova_compute[281613]: 2025-11-23 09:58:29.998 281617 DEBUG nova.virt.libvirt.guest [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:29.999 281617 INFO nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Migration operation has completed#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:29.999 281617 INFO nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] _post_live_migration() is started..#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.017 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.018 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.018 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[309151]: [NOTICE]   (309155) : haproxy version is 2.8.14-c23fe91
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[309151]: [NOTICE]   (309155) : path to executable is /usr/sbin/haproxy
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[309151]: [WARNING]  (309155) : Exiting Master process...
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[309151]: [ALERT]    (309155) : Current worker (309157) exited with code 143 (Terminated)
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3[309151]: [WARNING]  (309155) : All workers exited. Exiting... (0)
Nov 23 04:58:30 localhost systemd[1]: libpod-a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864.scope: Deactivated successfully.
Nov 23 04:58:30 localhost podman[309434]: 2025-11-23 09:58:30.123354637 +0000 UTC m=+0.099344831 container died a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 04:58:30 localhost podman[309434]: 2025-11-23 09:58:30.177191706 +0000 UTC m=+0.153181880 container cleanup a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:58:30 localhost podman[309449]: 2025-11-23 09:58:30.207128762 +0000 UTC m=+0.066778232 container cleanup a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:58:30 localhost systemd[1]: libpod-conmon-a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864.scope: Deactivated successfully.
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.236 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:30 localhost podman[309464]: 2025-11-23 09:58:30.295286117 +0000 UTC m=+0.092040802 container remove a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.300 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[c2971b2c-1cce-4a22-b37a-2a23301fae61]: (4, ('Sun Nov 23 09:58:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 (a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864)\na194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864\nSun Nov 23 09:58:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 (a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864)\na194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.301 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[ef32b4ce-ab10-4817-b961-f2fa5a741128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.303 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cd987c4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.306 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:30 localhost kernel: device tap8cd987c4-70 left promiscuous mode
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.320 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.324 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5269a8-8620-4c9f-b307-5f14055f70cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.336 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[86bf2f78-a145-48c7-862d-93488efa85a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.338 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[b83d8ed1-b2f6-455a-90c0-5ca4f012b509]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.351 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[8925f25e-50c9-4980-9e93-7f0a283b0dc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1183890, 'reachable_time': 24690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309485, 'error': None, 'target': 'ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.361 159535 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8cd987c4-7e4e-467f-9ee2-d70cb75b87c3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.362 159535 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6aa0a3-92ca-4d81-a74d-c57cf2d8c6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.363 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 27d340a7-60a4-4a73-9f16-bae5ab3411da in datapath 81348c6d-951a-4399-8703-476056b57fe9 unbound from our chassis#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.366 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81348c6d-951a-4399-8703-476056b57fe9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.368 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4f05a9-50c5-4362-88dc-e7d24341b36f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.368 159429 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 namespace which is not needed anymore#033[00m
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[309267]: [NOTICE]   (309271) : haproxy version is 2.8.14-c23fe91
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[309267]: [NOTICE]   (309271) : path to executable is /usr/sbin/haproxy
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[309267]: [WARNING]  (309271) : Exiting Master process...
Nov 23 04:58:30 localhost systemd[1]: libpod-b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451.scope: Deactivated successfully.
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[309267]: [ALERT]    (309271) : Current worker (309273) exited with code 143 (Terminated)
Nov 23 04:58:30 localhost neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9[309267]: [WARNING]  (309271) : All workers exited. Exiting... (0)
Nov 23 04:58:30 localhost podman[309504]: 2025-11-23 09:58:30.575781298 +0000 UTC m=+0.081683839 container died b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 04:58:30 localhost podman[309504]: 2025-11-23 09:58:30.613288471 +0000 UTC m=+0.119190912 container cleanup b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:58:30 localhost podman[309517]: 2025-11-23 09:58:30.648761409 +0000 UTC m=+0.066537766 container cleanup b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:58:30 localhost systemd[1]: libpod-conmon-b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451.scope: Deactivated successfully.
Nov 23 04:58:30 localhost podman[309531]: 2025-11-23 09:58:30.707917152 +0000 UTC m=+0.073513656 container remove b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.712 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[1c439f1d-c72f-43ae-90c0-9640be25e32f]: (4, ('Sun Nov 23 09:58:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 (b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451)\nb54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451\nSun Nov 23 09:58:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 (b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451)\nb54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.714 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c74c75-0695-411a-bfc3-14841beb5b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.716 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81348c6d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.719 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:30 localhost kernel: device tap81348c6d-90 left promiscuous mode
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.731 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.734 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7c5a6e-6fe4-4ceb-89c3-0ead3416240d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.754 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[95e898cf-cbce-4e26-a90b-1f5818bbbcf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.756 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[8bce3064-5b86-4a94-a446-ec3a4db4ffda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.775 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0d4c85-37fb-4d50-9c08-de75ae196edf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1183998, 'reachable_time': 22721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309555, 'error': None, 'target': 'ovnmeta-81348c6d-951a-4399-8703-476056b57fe9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.778 159535 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81348c6d-951a-4399-8703-476056b57fe9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 04:58:30 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:30.778 159535 DEBUG oslo.privsep.daemon [-] privsep: reply[0eeb1e8b-1189-4b3a-87d2-ee001cfa48b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.795 281617 DEBUG nova.network.neutron [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Activated binding for port 27d340a7-60a4-4a73-9f16-bae5ab3411da and host np0005532584.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.795 281617 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.796 281617 DEBUG nova.virt.libvirt.vif [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T09:58:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-151326874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005532586.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-151326874',id=6,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-23T09:58:16Z,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532586.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a2148c18d8f24a6db12dc22c787e8b2e',ramdisk_id='',reservation_id='r-6eghyq4d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',own
er_project_name='tempest-LiveAutoBlockMigrationV225Test-1734069518',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1734069518-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T09:58:19Z,user_data=None,user_id='9a28cb0574d148bf982a2a1a0b495020',uuid=76d6f171-13c9-4730-8ed3-ab467ef6831a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.797 281617 DEBUG nova.network.os_vif_util [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Converting VIF {"id": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "address": "fa:16:3e:fe:c3:5c", "network": {"id": "81348c6d-951a-4399-8703-476056b57fe9", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1707444454-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a2148c18d8f24a6db12dc22c787e8b2e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d340a7-60", "ovs_interfaceid": "27d340a7-60a4-4a73-9f16-bae5ab3411da", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.798 281617 DEBUG nova.network.os_vif_util [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.799 281617 DEBUG os_vif [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.801 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.802 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27d340a7-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.804 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.806 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.809 281617 INFO os_vif [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c3:5c,bridge_name='br-int',has_traffic_filtering=True,id=27d340a7-60a4-4a73-9f16-bae5ab3411da,network=Network(81348c6d-951a-4399-8703-476056b57fe9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap27d340a7-60')#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.810 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.810 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.811 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.811 281617 DEBUG nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.812 281617 INFO nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Deleting instance files /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a_del#033[00m
Nov 23 04:58:30 localhost nova_compute[281613]: 2025-11-23 09:58:30.813 281617 INFO nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Deletion of /var/lib/nova/instances/76d6f171-13c9-4730-8ed3-ab467ef6831a_del complete#033[00m
Nov 23 04:58:31 localhost systemd[1]: var-lib-containers-storage-overlay-a5c7ac5fb65411fe5912c692b4f25f6ee34f60e7593e544133720487c7159556-merged.mount: Deactivated successfully.
Nov 23 04:58:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b54d87cf65498f6f9c6291d467b769529a1610e04625f9a92072cafe819f0451-userdata-shm.mount: Deactivated successfully.
Nov 23 04:58:31 localhost systemd[1]: run-netns-ovnmeta\x2d81348c6d\x2d951a\x2d4399\x2d8703\x2d476056b57fe9.mount: Deactivated successfully.
Nov 23 04:58:31 localhost systemd[1]: var-lib-containers-storage-overlay-1ea9cc48a9ea1417d14d030cc5b6e9da9ce95a718d7ae6eb6738ffd9bf2b265c-merged.mount: Deactivated successfully.
Nov 23 04:58:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a194213bb3681b6eb35fc55750026707e0c8e6c703e6bcc1e35169880f367864-userdata-shm.mount: Deactivated successfully.
Nov 23 04:58:31 localhost systemd[1]: run-netns-ovnmeta\x2d8cd987c4\x2d7e4e\x2d467f\x2d9ee2\x2dd70cb75b87c3.mount: Deactivated successfully.
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.227 281617 DEBUG nova.compute.manager [req-a850b166-091f-4ec4-ad77-acf309fd9851 req-61e24159-c601-4e34-8300-ed4a1d4a1a87 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.228 281617 DEBUG oslo_concurrency.lockutils [req-a850b166-091f-4ec4-ad77-acf309fd9851 req-61e24159-c601-4e34-8300-ed4a1d4a1a87 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.229 281617 DEBUG oslo_concurrency.lockutils [req-a850b166-091f-4ec4-ad77-acf309fd9851 req-61e24159-c601-4e34-8300-ed4a1d4a1a87 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.229 281617 DEBUG oslo_concurrency.lockutils [req-a850b166-091f-4ec4-ad77-acf309fd9851 req-61e24159-c601-4e34-8300-ed4a1d4a1a87 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.229 281617 DEBUG nova.compute.manager [req-a850b166-091f-4ec4-ad77-acf309fd9851 req-61e24159-c601-4e34-8300-ed4a1d4a1a87 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.230 281617 DEBUG nova.compute.manager [req-a850b166-091f-4ec4-ad77-acf309fd9851 req-61e24159-c601-4e34-8300-ed4a1d4a1a87 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-unplugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.886 281617 DEBUG nova.compute.manager [req-7c19e84a-fba0-43b7-83b1-4c6233202b58 req-0c48e7e9-64f3-489d-93a4-f8d6c090cbf2 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.887 281617 DEBUG oslo_concurrency.lockutils [req-7c19e84a-fba0-43b7-83b1-4c6233202b58 req-0c48e7e9-64f3-489d-93a4-f8d6c090cbf2 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.887 281617 DEBUG oslo_concurrency.lockutils [req-7c19e84a-fba0-43b7-83b1-4c6233202b58 req-0c48e7e9-64f3-489d-93a4-f8d6c090cbf2 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.887 281617 DEBUG oslo_concurrency.lockutils [req-7c19e84a-fba0-43b7-83b1-4c6233202b58 req-0c48e7e9-64f3-489d-93a4-f8d6c090cbf2 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.887 281617 DEBUG nova.compute.manager [req-7c19e84a-fba0-43b7-83b1-4c6233202b58 req-0c48e7e9-64f3-489d-93a4-f8d6c090cbf2 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] No waiting events found dispatching network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 04:58:31 localhost nova_compute[281613]: 2025-11-23 09:58:31.888 281617 WARNING nova.compute.manager [req-7c19e84a-fba0-43b7-83b1-4c6233202b58 req-0c48e7e9-64f3-489d-93a4-f8d6c090cbf2 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Received unexpected event network-vif-plugged-27d340a7-60a4-4a73-9f16-bae5ab3411da for instance with vm_state active and task_state migrating.#033[00m
Nov 23 04:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:58:33 localhost podman[309558]: 2025-11-23 09:58:33.190003627 +0000 UTC m=+0.083262562 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:58:33 localhost podman[309557]: 2025-11-23 09:58:33.265174828 +0000 UTC m=+0.163464200 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 23 04:58:33 localhost podman[309558]: 2025-11-23 09:58:33.283208259 +0000 UTC m=+0.176467214 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 04:58:33 localhost podman[309556]: 2025-11-23 09:58:33.241321247 +0000 UTC m=+0.141356287 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 23 04:58:33 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:58:33 localhost podman[309557]: 2025-11-23 09:58:33.302116335 +0000 UTC m=+0.200405687 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 04:58:33 localhost podman[309556]: 2025-11-23 09:58:33.321871454 +0000 UTC m=+0.221906484 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, vcs-type=git)
Nov 23 04:58:33 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:58:33 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:58:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:58:33 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4212683681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.454937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913454981, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2462, "num_deletes": 251, "total_data_size": 3704725, "memory_usage": 3756952, "flush_reason": "Manual Compaction"}
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913465845, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 2394639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16086, "largest_seqno": 18543, "table_properties": {"data_size": 2385727, "index_size": 5545, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19315, "raw_average_key_size": 21, "raw_value_size": 2367399, "raw_average_value_size": 2576, "num_data_blocks": 239, "num_entries": 919, "num_filter_entries": 919, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891729, "oldest_key_time": 1763891729, "file_creation_time": 1763891913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10955 microseconds, and 6099 cpu microseconds.
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.465891) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 2394639 bytes OK
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.465913) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.467600) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.467621) EVENT_LOG_v1 {"time_micros": 1763891913467615, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.467642) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3693768, prev total WAL file size 3693768, number of live WAL files 2.
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.468688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end)
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(2338KB)], [27(15MB)]
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913468737, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18833580, "oldest_snapshot_seqno": -1}
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12166 keys, 16882898 bytes, temperature: kUnknown
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913540875, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 16882898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16814978, "index_size": 36395, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 328841, "raw_average_key_size": 27, "raw_value_size": 16608820, "raw_average_value_size": 1365, "num_data_blocks": 1365, "num_entries": 12166, "num_filter_entries": 12166, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763891913, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.541565) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 16882898 bytes
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.543310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.5 rd, 232.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 15.7 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(14.9) write-amplify(7.1) OK, records in: 12698, records dropped: 532 output_compression: NoCompression
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.543342) EVENT_LOG_v1 {"time_micros": 1763891913543329, "job": 14, "event": "compaction_finished", "compaction_time_micros": 72568, "compaction_time_cpu_micros": 44476, "output_level": 6, "num_output_files": 1, "total_output_size": 16882898, "num_input_records": 12698, "num_output_records": 12166, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913544078, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763891913546258, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.468567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.546347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.546353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.546355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.546356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:58:33 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-09:58:33.546358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 04:58:33 localhost nova_compute[281613]: 2025-11-23 09:58:33.871 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:33 localhost nova_compute[281613]: 2025-11-23 09:58:33.872 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:33 localhost nova_compute[281613]: 2025-11-23 09:58:33.873 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "76d6f171-13c9-4730-8ed3-ab467ef6831a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:33 localhost nova_compute[281613]: 2025-11-23 09:58:33.894 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:33 localhost nova_compute[281613]: 2025-11-23 09:58:33.894 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:33 localhost nova_compute[281613]: 2025-11-23 09:58:33.895 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:33 localhost nova_compute[281613]: 2025-11-23 09:58:33.895 281617 DEBUG nova.compute.resource_tracker [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:58:33 localhost nova_compute[281613]: 2025-11-23 09:58:33.896 281617 DEBUG oslo_concurrency.processutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:58:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:58:34 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4042067926' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.365 281617 DEBUG oslo_concurrency.processutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:58:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:58:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.516 281617 WARNING nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.518 281617 DEBUG nova.compute.resource_tracker [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11755MB free_disk=41.712066650390625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.518 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.518 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.544 281617 DEBUG nova.compute.resource_tracker [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Migration for instance 76d6f171-13c9-4730-8ed3-ab467ef6831a refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.570 281617 DEBUG nova.compute.resource_tracker [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.594 281617 DEBUG nova.compute.resource_tracker [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Migration ff77ba7c-b1ae-4e8d-9f48-992b8cf9317b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.595 281617 DEBUG nova.compute.resource_tracker [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.595 281617 DEBUG nova.compute.resource_tracker [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:58:34 localhost nova_compute[281613]: 2025-11-23 09:58:34.643 281617 DEBUG oslo_concurrency.processutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:58:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:35 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:58:35 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1229603635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.093 281617 DEBUG oslo_concurrency.processutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.100 281617 DEBUG nova.compute.provider_tree [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.124 281617 DEBUG nova.scheduler.client.report [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.146 281617 DEBUG nova.compute.resource_tracker [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.147 281617 DEBUG oslo_concurrency.lockutils [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.158 281617 INFO nova.compute.manager [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Migrating instance to np0005532584.localdomain finished successfully.#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.238 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.251 281617 INFO nova.scheduler.client.report [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] Deleted allocation for migration ff77ba7c-b1ae-4e8d-9f48-992b8cf9317b#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.252 281617 DEBUG nova.virt.libvirt.driver [None req-d94990a3-adc9-4b79-97fb-3249301b5664 d490760b7f0f4361a67870276d80560d 9e0eb6249a0548c0ad772871741f0b5d - - default default] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 23 04:58:35 localhost nova_compute[281613]: 2025-11-23 09:58:35.803 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:38 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:58:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:40 localhost nova_compute[281613]: 2025-11-23 09:58:40.245 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:40 localhost nova_compute[281613]: 2025-11-23 09:58:40.805 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:41 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:41.280 2 INFO neutron.agent.securitygroups_rpc [None req-b6d2f56d-2805-44c2-9e36-7ffa8fc09e14 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Security group member updated ['ff44a28d-1e1f-4163-b206-fdf77022bf0b']#033[00m
Nov 23 04:58:41 localhost podman[240144]: time="2025-11-23T09:58:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:58:41 localhost podman[240144]: @ - - [23/Nov/2025:09:58:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158150 "" "Go-http-client/1.1"
Nov 23 04:58:41 localhost podman[240144]: @ - - [23/Nov/2025:09:58:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19683 "" "Go-http-client/1.1"
Nov 23 04:58:42 localhost nova_compute[281613]: 2025-11-23 09:58:42.962 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:42 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:42.965 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:58:42 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:42.966 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 04:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:58:43 localhost podman[309751]: 2025-11-23 09:58:43.183968267 +0000 UTC m=+0.075657385 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:58:43 localhost podman[309751]: 2025-11-23 09:58:43.263991309 +0000 UTC m=+0.155680427 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:58:43 localhost podman[309750]: 2025-11-23 09:58:43.277338083 +0000 UTC m=+0.174865900 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:58:43 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:58:43 localhost podman[309750]: 2025-11-23 09:58:43.288839657 +0000 UTC m=+0.186367474 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:58:43 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:43.296 2 INFO neutron.agent.securitygroups_rpc [None req-50406108-6fe1-4d79-842a-8b928e46e646 9a28cb0574d148bf982a2a1a0b495020 a2148c18d8f24a6db12dc22c787e8b2e - - default default] Security group member updated ['ff44a28d-1e1f-4163-b206-fdf77022bf0b']#033[00m
Nov 23 04:58:43 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:58:43 localhost podman[309749]: 2025-11-23 09:58:43.33736216 +0000 UTC m=+0.234468476 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 23 04:58:43 localhost podman[309749]: 2025-11-23 09:58:43.3692393 +0000 UTC m=+0.266345606 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 23 04:58:43 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:58:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:58:44 localhost podman[309815]: 2025-11-23 09:58:44.281118683 +0000 UTC m=+0.078208164 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 23 04:58:44 localhost podman[309815]: 2025-11-23 09:58:44.319617854 +0000 UTC m=+0.116707295 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:58:44 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:58:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:45 localhost nova_compute[281613]: 2025-11-23 09:58:45.000 281617 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763891909.9973745, 76d6f171-13c9-4730-8ed3-ab467ef6831a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 04:58:45 localhost nova_compute[281613]: 2025-11-23 09:58:45.000 281617 INFO nova.compute.manager [-] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] VM Stopped (Lifecycle Event)#033[00m
Nov 23 04:58:45 localhost nova_compute[281613]: 2025-11-23 09:58:45.019 281617 DEBUG nova.compute.manager [None req-9b397228-e662-49a1-872c-46e6be9717cf - - - - - -] [instance: 76d6f171-13c9-4730-8ed3-ab467ef6831a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 04:58:45 localhost nova_compute[281613]: 2025-11-23 09:58:45.246 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:45 localhost nova_compute[281613]: 2025-11-23 09:58:45.807 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:46 localhost ovn_metadata_agent[159423]: 2025-11-23 09:58:46.968 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:58:47 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 04:58:47 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:58:47 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:58:47 localhost podman[309853]: 2025-11-23 09:58:47.771689936 +0000 UTC m=+0.063041210 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 04:58:47 localhost nova_compute[281613]: 2025-11-23 09:58:47.825 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:50 localhost nova_compute[281613]: 2025-11-23 09:58:50.250 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:50 localhost nova_compute[281613]: 2025-11-23 09:58:50.809 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:52 localhost openstack_network_exporter[242118]: ERROR   09:58:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:58:52 localhost openstack_network_exporter[242118]: ERROR   09:58:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:58:52 localhost openstack_network_exporter[242118]: ERROR   09:58:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:58:52 localhost openstack_network_exporter[242118]: ERROR   09:58:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:58:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:58:52 localhost openstack_network_exporter[242118]: ERROR   09:58:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:58:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:58:54 localhost nova_compute[281613]: 2025-11-23 09:58:54.301 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:54 localhost podman[309890]: 2025-11-23 09:58:54.321626661 +0000 UTC m=+0.104498602 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:58:54 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 04:58:54 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:58:54 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:58:54 localhost snmpd[67254]: empty variable list in _query
Nov 23 04:58:54 localhost snmpd[67254]: empty variable list in _query
Nov 23 04:58:54 localhost snmpd[67254]: empty variable list in _query
Nov 23 04:58:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:58:55 localhost nova_compute[281613]: 2025-11-23 09:58:55.258 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:55 localhost nova_compute[281613]: 2025-11-23 09:58:55.811 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:58:59 localhost neutron_sriov_agent[255613]: 2025-11-23 09:58:59.116 2 INFO neutron.agent.securitygroups_rpc [None req-f36f5a6d-ca31-44d9-bac1-0308580f3e95 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Security group member updated ['a3350144-9b09-432b-a32e-ef84bb8bf494']#033[00m
Nov 23 04:58:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:00 localhost nova_compute[281613]: 2025-11-23 09:59:00.261 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:00 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:00.634 2 INFO neutron.agent.securitygroups_rpc [None req-2e659d4a-74ef-46b7-bd3b-2baf8d6d13fe 7f7875c0084c46fdb2e7b37e4fc44faf 253c88568a634476a6c1284eed6a9464 - - default default] Security group member updated ['a3350144-9b09-432b-a32e-ef84bb8bf494']#033[00m
Nov 23 04:59:00 localhost nova_compute[281613]: 2025-11-23 09:59:00.812 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e93 e93: 6 total, 6 up, 6 in
Nov 23 04:59:03 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:03.938 2 INFO neutron.agent.securitygroups_rpc [None req-9d56fe05-4ef8-4d55-837b-9cee7fc5dad7 4b677b000abe4b0687ff1afcd1016893 2a693c1f03094401b2a83bfa038e2d85 - - default default] Security group member updated ['e11e3507-78f9-4b55-80fe-2aa7bb5d486d']#033[00m
Nov 23 04:59:04 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:04 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:04 localhost podman[309925]: 2025-11-23 09:59:04.059652427 +0000 UTC m=+0.061754415 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:59:04 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:59:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:59:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:59:04 localhost systemd[1]: tmp-crun.PAYUrT.mount: Deactivated successfully.
Nov 23 04:59:04 localhost podman[309941]: 2025-11-23 09:59:04.201829025 +0000 UTC m=+0.101869630 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:59:04 localhost podman[309941]: 2025-11-23 09:59:04.214949683 +0000 UTC m=+0.114990348 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:59:04 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:59:04 localhost nova_compute[281613]: 2025-11-23 09:59:04.337 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:04 localhost podman[309940]: 2025-11-23 09:59:04.372437489 +0000 UTC m=+0.272063852 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:59:04 localhost podman[309939]: 2025-11-23 09:59:04.339861321 +0000 UTC m=+0.246352301 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Nov 23 04:59:04 localhost podman[309940]: 2025-11-23 09:59:04.414029403 +0000 UTC m=+0.313655706 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:59:04 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:59:04 localhost podman[309939]: 2025-11-23 09:59:04.469705542 +0000 UTC m=+0.376196522 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Nov 23 04:59:04 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:59:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e94 e94: 6 total, 6 up, 6 in
Nov 23 04:59:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:05 localhost systemd[1]: tmp-crun.EPw7qz.mount: Deactivated successfully.
Nov 23 04:59:05 localhost nova_compute[281613]: 2025-11-23 09:59:05.264 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:05 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e95 e95: 6 total, 6 up, 6 in
Nov 23 04:59:05 localhost nova_compute[281613]: 2025-11-23 09:59:05.814 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:06 localhost nova_compute[281613]: 2025-11-23 09:59:06.088 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:06 localhost ovn_controller[153786]: 2025-11-23T09:59:06Z|00065|binding|INFO|Releasing lport 5836fd2a-7ba0-417c-b0e1-91c14dd29120 from this chassis (sb_readonly=0)
Nov 23 04:59:06 localhost kernel: device tap5836fd2a-7b left promiscuous mode
Nov 23 04:59:06 localhost ovn_controller[153786]: 2025-11-23T09:59:06Z|00066|binding|INFO|Setting lport 5836fd2a-7ba0-417c-b0e1-91c14dd29120 down in Southbound
Nov 23 04:59:06 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:06.101 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-4888f017-3f3f-45ef-b058-53b634233093', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4888f017-3f3f-45ef-b058-53b634233093', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1915d3e5d4254231a0517e2dcf35848f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11af2473-7670-43cb-8698-dcf3af8d28c8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=5836fd2a-7ba0-417c-b0e1-91c14dd29120) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:59:06 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:06.103 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 5836fd2a-7ba0-417c-b0e1-91c14dd29120 in datapath 4888f017-3f3f-45ef-b058-53b634233093 unbound from our chassis#033[00m
Nov 23 04:59:06 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:06.105 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4888f017-3f3f-45ef-b058-53b634233093, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:59:06 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:06.107 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec48ca3-97c1-4201-8837-8be90780d4b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:59:06 localhost nova_compute[281613]: 2025-11-23 09:59:06.112 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:07 localhost podman[310027]: 2025-11-23 09:59:07.10037222 +0000 UTC m=+0.060094530 container kill 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:59:07 localhost dnsmasq[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/addn_hosts - 0 addresses
Nov 23 04:59:07 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/host
Nov 23 04:59:07 localhost dnsmasq-dhcp[309109]: read /var/lib/neutron/dhcp/e40b78ba-3bb8-4706-86c2-b7af5d0d6c67/opts
Nov 23 04:59:07 localhost kernel: device tapd3fbb916-4d left promiscuous mode
Nov 23 04:59:07 localhost ovn_controller[153786]: 2025-11-23T09:59:07Z|00067|binding|INFO|Releasing lport d3fbb916-4dec-4a6b-8718-6271b2e70b14 from this chassis (sb_readonly=0)
Nov 23 04:59:07 localhost ovn_controller[153786]: 2025-11-23T09:59:07Z|00068|binding|INFO|Setting lport d3fbb916-4dec-4a6b-8718-6271b2e70b14 down in Southbound
Nov 23 04:59:07 localhost nova_compute[281613]: 2025-11-23 09:59:07.531 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:07 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:07.544 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57d9e088e75b4a3482d0e3a02bcce5be', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13c32b33-300d-4a8e-8e29-21d478bcccf9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=d3fbb916-4dec-4a6b-8718-6271b2e70b14) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:59:07 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:07.546 159429 INFO neutron.agent.ovn.metadata.agent [-] Port d3fbb916-4dec-4a6b-8718-6271b2e70b14 in datapath e40b78ba-3bb8-4706-86c2-b7af5d0d6c67 unbound from our chassis#033[00m
Nov 23 04:59:07 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:07.549 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:59:07 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:07.550 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[3309025d-8214-4a17-8f2c-3dcf2c427d33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:59:07 localhost nova_compute[281613]: 2025-11-23 09:59:07.557 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:08 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:08.345 2 INFO neutron.agent.securitygroups_rpc [None req-0bae4724-8fa8-4216-8cb0-34bcdfbbc61a 4b677b000abe4b0687ff1afcd1016893 2a693c1f03094401b2a83bfa038e2d85 - - default default] Security group member updated ['e11e3507-78f9-4b55-80fe-2aa7bb5d486d']#033[00m
Nov 23 04:59:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e96 e96: 6 total, 6 up, 6 in
Nov 23 04:59:09 localhost dnsmasq[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:59:09 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:09 localhost dnsmasq-dhcp[262961]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:09 localhost podman[310070]: 2025-11-23 09:59:09.220496091 +0000 UTC m=+0.063553115 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:59:09 localhost systemd[1]: tmp-crun.ovdRzc.mount: Deactivated successfully.
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent [None req-18355b26-0b3a-43d9-9bd2-180316a61621 - - - - - -] Unable to reload_allocations dhcp for 4888f017-3f3f-45ef-b058-53b634233093.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap5836fd2a-7b not found in namespace qdhcp-4888f017-3f3f-45ef-b058-53b634233093.
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     return fut.result()
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     raise self._exception
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap5836fd2a-7b not found in namespace qdhcp-4888f017-3f3f-45ef-b058-53b634233093.
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.248 262721 ERROR neutron.agent.dhcp.agent #033[00m
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.256 262721 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m
Nov 23 04:59:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:09.267 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:59:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:09.267 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:59:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:09.268 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:59:09 localhost nova_compute[281613]: 2025-11-23 09:59:09.317 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.473 262721 INFO neutron.agent.dhcp.agent [None req-c53212fc-0a02-4eb3-8d03-6048a7ead8f7 - - - - - -] All active networks have been fetched through RPC.#033[00m
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.474 262721 INFO neutron.agent.dhcp.agent [-] Starting network 4888f017-3f3f-45ef-b058-53b634233093 dhcp configuration#033[00m
Nov 23 04:59:09 localhost dnsmasq[262961]: exiting on receipt of SIGTERM
Nov 23 04:59:09 localhost podman[310101]: 2025-11-23 09:59:09.66919181 +0000 UTC m=+0.067375789 container kill f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:59:09 localhost systemd[1]: libpod-f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6.scope: Deactivated successfully.
Nov 23 04:59:09 localhost podman[310116]: 2025-11-23 09:59:09.740496584 +0000 UTC m=+0.057786237 container died f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:59:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e97 e97: 6 total, 6 up, 6 in
Nov 23 04:59:09 localhost podman[310116]: 2025-11-23 09:59:09.774980565 +0000 UTC m=+0.092270168 container cleanup f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 04:59:09 localhost systemd[1]: libpod-conmon-f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6.scope: Deactivated successfully.
Nov 23 04:59:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:09 localhost podman[310118]: 2025-11-23 09:59:09.832284858 +0000 UTC m=+0.139901217 container remove f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 04:59:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:09.884 262721 INFO neutron.agent.linux.ip_lib [-] Device tap5836fd2a-7b cannot be used as it has no MAC address#033[00m
Nov 23 04:59:09 localhost nova_compute[281613]: 2025-11-23 09:59:09.906 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:09 localhost kernel: device tap5836fd2a-7b entered promiscuous mode
Nov 23 04:59:09 localhost NetworkManager[5990]: <info>  [1763891949.9203] manager: (tap5836fd2a-7b): new Generic device (/org/freedesktop/NetworkManager/Devices/19)
Nov 23 04:59:09 localhost nova_compute[281613]: 2025-11-23 09:59:09.920 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:09 localhost ovn_controller[153786]: 2025-11-23T09:59:09Z|00069|binding|INFO|Claiming lport 5836fd2a-7ba0-417c-b0e1-91c14dd29120 for this chassis.
Nov 23 04:59:09 localhost ovn_controller[153786]: 2025-11-23T09:59:09Z|00070|binding|INFO|5836fd2a-7ba0-417c-b0e1-91c14dd29120: Claiming unknown
Nov 23 04:59:09 localhost systemd-udevd[310150]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 04:59:09 localhost ovn_controller[153786]: 2025-11-23T09:59:09Z|00071|binding|INFO|Setting lport 5836fd2a-7ba0-417c-b0e1-91c14dd29120 ovn-installed in OVS
Nov 23 04:59:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:09.930 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-4888f017-3f3f-45ef-b058-53b634233093', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4888f017-3f3f-45ef-b058-53b634233093', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1915d3e5d4254231a0517e2dcf35848f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11af2473-7670-43cb-8698-dcf3af8d28c8, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=5836fd2a-7ba0-417c-b0e1-91c14dd29120) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:59:09 localhost ovn_controller[153786]: 2025-11-23T09:59:09Z|00072|binding|INFO|Setting lport 5836fd2a-7ba0-417c-b0e1-91c14dd29120 up in Southbound
Nov 23 04:59:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:09.932 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 5836fd2a-7ba0-417c-b0e1-91c14dd29120 in datapath 4888f017-3f3f-45ef-b058-53b634233093 bound to our chassis#033[00m
Nov 23 04:59:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:09.935 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9f87db19-0599-462f-b8c0-280fa85e1e72 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 04:59:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:09.935 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4888f017-3f3f-45ef-b058-53b634233093, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:59:09 localhost nova_compute[281613]: 2025-11-23 09:59:09.934 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:09 localhost nova_compute[281613]: 2025-11-23 09:59:09.937 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:09 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:09.936 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[2543448d-b13f-49e4-bdc7-73ebe7f1c109]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:59:09 localhost nova_compute[281613]: 2025-11-23 09:59:09.972 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:10 localhost nova_compute[281613]: 2025-11-23 09:59:10.010 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:10 localhost nova_compute[281613]: 2025-11-23 09:59:10.045 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.201 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.201 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.201 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.202 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.202 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 09:59:10.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 04:59:10 localhost systemd[1]: var-lib-containers-storage-overlay-1aaedf84d18e8f5cac94b0f6d56ff43a8182966c832c7206a8b41fc2cc58f4d1-merged.mount: Deactivated successfully.
Nov 23 04:59:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7a1aaed213958c8edb79c246ed623006c7c53892c0153f1b6428e9309f860b6-userdata-shm.mount: Deactivated successfully.
Nov 23 04:59:10 localhost nova_compute[281613]: 2025-11-23 09:59:10.268 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e98 e98: 6 total, 6 up, 6 in
Nov 23 04:59:10 localhost nova_compute[281613]: 2025-11-23 09:59:10.817 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:10 localhost podman[310203]: 
Nov 23 04:59:10 localhost podman[310203]: 2025-11-23 09:59:10.882285539 +0000 UTC m=+0.082346586 container create f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:59:10 localhost systemd[1]: Started libpod-conmon-f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d.scope.
Nov 23 04:59:10 localhost systemd[1]: Started libcrun container.
Nov 23 04:59:10 localhost podman[310203]: 2025-11-23 09:59:10.845180087 +0000 UTC m=+0.045241154 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:59:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ef5e8ba18b7b3e6372c81ec3a4ea746bfd108dc369d795c8aee51f7f544bf57/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:59:10 localhost podman[310203]: 2025-11-23 09:59:10.960740519 +0000 UTC m=+0.160801556 container init f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:59:10 localhost podman[310203]: 2025-11-23 09:59:10.969888359 +0000 UTC m=+0.169949396 container start f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 23 04:59:10 localhost dnsmasq[310221]: started, version 2.85 cachesize 150
Nov 23 04:59:10 localhost dnsmasq[310221]: DNS service limited to local subnets
Nov 23 04:59:10 localhost dnsmasq[310221]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 04:59:10 localhost dnsmasq[310221]: warning: no upstream servers configured
Nov 23 04:59:10 localhost dnsmasq-dhcp[310221]: DHCP, static leases only on 192.168.122.0, lease time 1d
Nov 23 04:59:10 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:59:10 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:10 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:11 localhost nova_compute[281613]: 2025-11-23 09:59:11.030 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:11 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:11.039 262721 INFO neutron.agent.dhcp.agent [None req-9a9c8e72-1d90-4e21-90fb-fae36c52ea70 - - - - - -] Finished network 4888f017-3f3f-45ef-b058-53b634233093 dhcp configuration#033[00m
Nov 23 04:59:11 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:11.041 262721 INFO neutron.agent.dhcp.agent [None req-c53212fc-0a02-4eb3-8d03-6048a7ead8f7 - - - - - -] Synchronizing state complete#033[00m
Nov 23 04:59:11 localhost podman[240144]: time="2025-11-23T09:59:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:59:11 localhost podman[240144]: @ - - [23/Nov/2025:09:59:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158150 "" "Go-http-client/1.1"
Nov 23 04:59:11 localhost dnsmasq[309109]: exiting on receipt of SIGTERM
Nov 23 04:59:11 localhost podman[310238]: 2025-11-23 09:59:11.361746728 +0000 UTC m=+0.118546175 container kill 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 04:59:11 localhost systemd[1]: libpod-30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e.scope: Deactivated successfully.
Nov 23 04:59:11 localhost podman[240144]: @ - - [23/Nov/2025:09:59:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19673 "" "Go-http-client/1.1"
Nov 23 04:59:11 localhost podman[310251]: 2025-11-23 09:59:11.440906987 +0000 UTC m=+0.060563223 container died 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:59:11 localhost podman[310251]: 2025-11-23 09:59:11.482953084 +0000 UTC m=+0.102609280 container cleanup 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:59:11 localhost systemd[1]: libpod-conmon-30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e.scope: Deactivated successfully.
Nov 23 04:59:11 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:11.501 262721 INFO neutron.agent.dhcp.agent [None req-47c2388f-66aa-4a06-973c-085f0b121b19 - - - - - -] DHCP configuration for ports {'e0005f93-e3d4-4607-a8bd-d715c3013354', 'cb90d712-4442-4d65-b8ed-0e95bb9a7fdd', '5836fd2a-7ba0-417c-b0e1-91c14dd29120', 'f9b93e75-681a-4b9c-8a78-b51e0259bc38', '91a1820f-c921-4d0a-bc88-eddca7bcadfc', '796046a4-2720-44da-bd08-f50f7bf76530'} is completed#033[00m
Nov 23 04:59:11 localhost podman[310252]: 2025-11-23 09:59:11.552641685 +0000 UTC m=+0.166554374 container remove 30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e40b78ba-3bb8-4706-86c2-b7af5d0d6c67, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:59:11 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:11.590 262721 INFO neutron.agent.dhcp.agent [None req-52ce31d4-6e86-4772-a051-bf060eaecb52 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 04:59:11 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:11.591 262721 INFO neutron.agent.dhcp.agent [None req-52ce31d4-6e86-4772-a051-bf060eaecb52 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 04:59:12 localhost nova_compute[281613]: 2025-11-23 09:59:12.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:12 localhost nova_compute[281613]: 2025-11-23 09:59:12.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:12 localhost nova_compute[281613]: 2025-11-23 09:59:12.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 04:59:12 localhost systemd[1]: var-lib-containers-storage-overlay-1fbe7e9ea349ed9312c6cf0466b27178d4662ccb430336d77979dd8507ad40e0-merged.mount: Deactivated successfully.
Nov 23 04:59:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30913a2ec2eb8d2062c1fc13c67de2a243a93f24cb83ad6ab7310c9721e50e9e-userdata-shm.mount: Deactivated successfully.
Nov 23 04:59:12 localhost systemd[1]: run-netns-qdhcp\x2de40b78ba\x2d3bb8\x2d4706\x2d86c2\x2db7af5d0d6c67.mount: Deactivated successfully.
Nov 23 04:59:12 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:12.389 2 INFO neutron.agent.securitygroups_rpc [req-a16da276-a12d-4c8d-9117-64c33f913ca9 req-9efe9caa-9e38-4b50-8b4e-539fa928addc 492e2909a77a4032ab6c29a26d12fb14 0497de4959b2494e8036eb39226430d6 - - default default] Security group member updated ['2da1104f-77c5-475e-b21f-e52710edc8b5']#033[00m
Nov 23 04:59:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e99 e99: 6 total, 6 up, 6 in
Nov 23 04:59:13 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 04:59:13 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2564275816' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 04:59:13 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:13.809 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:13Z, description=, device_id=9598f82f-5487-414c-b61d-d64ce4fc0187, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c609d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c60f10>], id=c318297f-4fe2-4673-8a1d-99a8266e473b, ip_allocation=immediate, mac_address=fa:16:3e:58:98:6d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=877, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:13Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.015 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.035 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.036 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.036 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.059 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 04:59:14 localhost systemd[1]: tmp-crun.SxSYFM.mount: Deactivated successfully.
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.061 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.062 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.062 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:14 localhost podman[310295]: 2025-11-23 09:59:14.066728865 +0000 UTC m=+0.083702076 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:59:14 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:14 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:14 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.087 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.088 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.088 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.088 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.089 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:59:14 localhost systemd[1]: tmp-crun.s9EPWo.mount: Deactivated successfully.
Nov 23 04:59:14 localhost podman[310309]: 2025-11-23 09:59:14.188611656 +0000 UTC m=+0.094638836 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 04:59:14 localhost podman[310310]: 2025-11-23 09:59:14.239905513 +0000 UTC m=+0.145777488 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 04:59:14 localhost podman[310310]: 2025-11-23 09:59:14.251920862 +0000 UTC m=+0.157792877 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 04:59:14 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:59:14 localhost podman[310311]: 2025-11-23 09:59:14.305808659 +0000 UTC m=+0.202221825 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 04:59:14 localhost podman[310309]: 2025-11-23 09:59:14.363877302 +0000 UTC m=+0.269904442 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:59:14 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:14.371 262721 INFO neutron.agent.dhcp.agent [None req-175e0ec2-3100-4c72-9d73-0b2767369c62 - - - - - -] DHCP configuration for ports {'c318297f-4fe2-4673-8a1d-99a8266e473b'} is completed#033[00m
Nov 23 04:59:14 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:59:14 localhost podman[310311]: 2025-11-23 09:59:14.411911169 +0000 UTC m=+0.308324315 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:59:14 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:59:14 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:14 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:14 localhost podman[310425]: 2025-11-23 09:59:14.482503144 +0000 UTC m=+0.049693804 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 04:59:14 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:14 localhost podman[310416]: 2025-11-23 09:59:14.489066134 +0000 UTC m=+0.086551824 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:59:14 localhost podman[310416]: 2025-11-23 09:59:14.501019502 +0000 UTC m=+0.098505202 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 04:59:14 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:59:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:59:14 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1089133868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:59:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e100 e100: 6 total, 6 up, 6 in
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.534 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.711 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.743 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.745 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11688MB free_disk=41.70030212402344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.746 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.746 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.801 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.802 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 04:59:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:14 localhost nova_compute[281613]: 2025-11-23 09:59:14.825 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 04:59:15 localhost nova_compute[281613]: 2025-11-23 09:59:15.280 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 04:59:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/929396430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 04:59:15 localhost nova_compute[281613]: 2025-11-23 09:59:15.337 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 04:59:15 localhost nova_compute[281613]: 2025-11-23 09:59:15.344 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 04:59:15 localhost nova_compute[281613]: 2025-11-23 09:59:15.358 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 04:59:15 localhost nova_compute[281613]: 2025-11-23 09:59:15.360 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 04:59:15 localhost nova_compute[281613]: 2025-11-23 09:59:15.361 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 04:59:15 localhost nova_compute[281613]: 2025-11-23 09:59:15.818 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:16.038 262721 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmp5lzblewe/privsep.sock']#033[00m
Nov 23 04:59:16 localhost nova_compute[281613]: 2025-11-23 09:59:16.235 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:16 localhost nova_compute[281613]: 2025-11-23 09:59:16.317 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:16 localhost nova_compute[281613]: 2025-11-23 09:59:16.319 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 04:59:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:16.668 262721 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 04:59:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:16.551 310481 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 04:59:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:16.558 310481 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 04:59:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:16.561 310481 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 23 04:59:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:16.562 310481 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310481#033[00m
Nov 23 04:59:16 localhost dnsmasq-dhcp[310221]: DHCPRELEASE(tap5836fd2a-7b) 192.168.122.199 fa:16:3e:04:77:45
Nov 23 04:59:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e101 e101: 6 total, 6 up, 6 in
Nov 23 04:59:17 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:59:17 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:17 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:17 localhost systemd[1]: tmp-crun.5FkJjY.mount: Deactivated successfully.
Nov 23 04:59:17 localhost podman[310503]: 2025-11-23 09:59:17.488843363 +0000 UTC m=+0.077242248 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:59:18 localhost nova_compute[281613]: 2025-11-23 09:59:18.807 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:20 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:20.220 2 INFO neutron.agent.securitygroups_rpc [req-e10db1c6-11f3-4ff7-8a20-47058bab960f req-be246029-4620-443a-8a27-dc66d74bf8a5 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['df6d8f7b-74cc-4864-a7e2-24c32662f7e1']#033[00m
Nov 23 04:59:20 localhost nova_compute[281613]: 2025-11-23 09:59:20.274 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:20 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:20.714 2 INFO neutron.agent.securitygroups_rpc [req-b63466c9-444c-4747-806e-6e70f6ca8dbf req-1b97f9eb-afe9-470d-a373-457d04103769 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['486481c0-58d7-474c-ac28-9109e6d75e3e']#033[00m
Nov 23 04:59:20 localhost nova_compute[281613]: 2025-11-23 09:59:20.820 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:21 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:21.658 2 INFO neutron.agent.securitygroups_rpc [req-5b5e1aae-0130-44ef-b3c8-2b4a33b1f155 req-73054398-e6b7-4548-a319-a659c6c54985 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['9f0e447c-560b-475e-bb8e-29f8dd459211']#033[00m
Nov 23 04:59:22 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:22.053 262721 INFO neutron.agent.linux.ip_lib [None req-992250e8-0235-474c-a9bf-5fd86bf814f5 - - - - - -] Device tap6828152d-92 cannot be used as it has no MAC address#033[00m
Nov 23 04:59:22 localhost nova_compute[281613]: 2025-11-23 09:59:22.116 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:22 localhost kernel: device tap6828152d-92 entered promiscuous mode
Nov 23 04:59:22 localhost ovn_controller[153786]: 2025-11-23T09:59:22Z|00073|binding|INFO|Claiming lport 6828152d-92bd-408b-8d2a-24a9e4ed3cea for this chassis.
Nov 23 04:59:22 localhost NetworkManager[5990]: <info>  [1763891962.1276] manager: (tap6828152d-92): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 23 04:59:22 localhost nova_compute[281613]: 2025-11-23 09:59:22.127 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:22 localhost ovn_controller[153786]: 2025-11-23T09:59:22Z|00074|binding|INFO|6828152d-92bd-408b-8d2a-24a9e4ed3cea: Claiming unknown
Nov 23 04:59:22 localhost systemd-udevd[310535]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 04:59:22 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:22.141 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-3575213f-e6d7-487b-bb57-318a295dd105', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3575213f-e6d7-487b-bb57-318a295dd105', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79509bc833494f3598e01347dc55dea9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d0c7179-faf7-4e4f-ae4a-4d85903602a4, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=6828152d-92bd-408b-8d2a-24a9e4ed3cea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:59:22 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:22.143 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 6828152d-92bd-408b-8d2a-24a9e4ed3cea in datapath 3575213f-e6d7-487b-bb57-318a295dd105 bound to our chassis#033[00m
Nov 23 04:59:22 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:22.145 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4caa437e-5c19-4ecd-a7a9-f93e6fcbbdf3 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 04:59:22 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:22.146 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3575213f-e6d7-487b-bb57-318a295dd105, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:59:22 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:22.146 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6dac59-0dee-47c0-b821-8a7ce98f7332]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:59:22 localhost journal[229736]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 23 04:59:22 localhost journal[229736]: hostname: np0005532586.localdomain
Nov 23 04:59:22 localhost journal[229736]: ethtool ioctl error on tap6828152d-92: No such device
Nov 23 04:59:22 localhost journal[229736]: ethtool ioctl error on tap6828152d-92: No such device
Nov 23 04:59:22 localhost ovn_controller[153786]: 2025-11-23T09:59:22Z|00075|binding|INFO|Setting lport 6828152d-92bd-408b-8d2a-24a9e4ed3cea ovn-installed in OVS
Nov 23 04:59:22 localhost ovn_controller[153786]: 2025-11-23T09:59:22Z|00076|binding|INFO|Setting lport 6828152d-92bd-408b-8d2a-24a9e4ed3cea up in Southbound
Nov 23 04:59:22 localhost nova_compute[281613]: 2025-11-23 09:59:22.171 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:22 localhost journal[229736]: ethtool ioctl error on tap6828152d-92: No such device
Nov 23 04:59:22 localhost journal[229736]: ethtool ioctl error on tap6828152d-92: No such device
Nov 23 04:59:22 localhost journal[229736]: ethtool ioctl error on tap6828152d-92: No such device
Nov 23 04:59:22 localhost journal[229736]: ethtool ioctl error on tap6828152d-92: No such device
Nov 23 04:59:22 localhost journal[229736]: ethtool ioctl error on tap6828152d-92: No such device
Nov 23 04:59:22 localhost journal[229736]: ethtool ioctl error on tap6828152d-92: No such device
Nov 23 04:59:22 localhost nova_compute[281613]: 2025-11-23 09:59:22.209 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:22 localhost nova_compute[281613]: 2025-11-23 09:59:22.237 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:22 localhost openstack_network_exporter[242118]: ERROR   09:59:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:59:22 localhost openstack_network_exporter[242118]: ERROR   09:59:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:59:22 localhost openstack_network_exporter[242118]: ERROR   09:59:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:59:22 localhost openstack_network_exporter[242118]: ERROR   09:59:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:59:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:59:22 localhost openstack_network_exporter[242118]: ERROR   09:59:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:59:22 localhost openstack_network_exporter[242118]: 
Nov 23 04:59:22 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e102 e102: 6 total, 6 up, 6 in
Nov 23 04:59:22 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:22.688 2 INFO neutron.agent.securitygroups_rpc [req-2f39ed52-ad77-4aa6-9471-651d11ecbf13 req-f42d6273-96eb-4a84-b6a8-20685191fd4a 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['9d3d4eb8-5be7-4867-b930-e62b16d22d58']#033[00m
Nov 23 04:59:23 localhost podman[310606]: 
Nov 23 04:59:23 localhost podman[310606]: 2025-11-23 09:59:23.070570993 +0000 UTC m=+0.087088019 container create f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 04:59:23 localhost systemd[1]: Started libpod-conmon-f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158.scope.
Nov 23 04:59:23 localhost podman[310606]: 2025-11-23 09:59:23.028620322 +0000 UTC m=+0.045137378 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:59:23 localhost systemd[1]: Started libcrun container.
Nov 23 04:59:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1986855ef55539a47081e562617e45a725267d0cd84e674b0e536201d98ff3fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:59:23 localhost podman[310606]: 2025-11-23 09:59:23.148012385 +0000 UTC m=+0.164529421 container init f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 04:59:23 localhost podman[310606]: 2025-11-23 09:59:23.165637359 +0000 UTC m=+0.182154385 container start f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 04:59:23 localhost dnsmasq[310625]: started, version 2.85 cachesize 150
Nov 23 04:59:23 localhost dnsmasq[310625]: DNS service limited to local subnets
Nov 23 04:59:23 localhost dnsmasq[310625]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 04:59:23 localhost dnsmasq[310625]: warning: no upstream servers configured
Nov 23 04:59:23 localhost dnsmasq-dhcp[310625]: DHCP, static leases only on 10.101.0.0, lease time 1d
Nov 23 04:59:23 localhost dnsmasq[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/addn_hosts - 0 addresses
Nov 23 04:59:23 localhost dnsmasq-dhcp[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/host
Nov 23 04:59:23 localhost dnsmasq-dhcp[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/opts
Nov 23 04:59:23 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:23.292 262721 INFO neutron.agent.dhcp.agent [None req-56f82aa5-1972-46d3-bbae-d38ac83123b6 - - - - - -] DHCP configuration for ports {'01131251-5789-4b50-9364-a5b113686372'} is completed#033[00m
Nov 23 04:59:23 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:23.521 2 INFO neutron.agent.securitygroups_rpc [req-9fd26688-f794-4469-9fd8-a5b40d60592d req-5ebc8662-650f-469d-8c45-5ce5c30495b8 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['b40df903-b9f3-4a1c-8419-71383dae71f9']#033[00m
Nov 23 04:59:23 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:23.987 2 INFO neutron.agent.securitygroups_rpc [req-e3ac8e09-5876-4e41-80c7-46043b4c6329 req-ca05e120-41d0-4e85-be51-0d5858a51936 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['b40df903-b9f3-4a1c-8419-71383dae71f9']#033[00m
Nov 23 04:59:24 localhost systemd[1]: tmp-crun.yMR7Zn.mount: Deactivated successfully.
Nov 23 04:59:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:24.363 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:24Z, description=, device_id=89783321-05cb-41ec-bfca-a08a32ddb0e0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790db2280>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790db2670>], id=b83a6b90-00b7-41ef-a6c9-8e30e4359443, ip_allocation=immediate, mac_address=fa:16:3e:ee:40:cc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:18Z, description=, dns_domain=, id=3575213f-e6d7-487b-bb57-318a295dd105, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-223881741, port_security_enabled=True, project_id=79509bc833494f3598e01347dc55dea9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9565, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=907, status=ACTIVE, subnets=['bf40141c-2861-4fcd-a67c-f42a0d372a72'], tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T09:59:20Z, vlan_transparent=None, network_id=3575213f-e6d7-487b-bb57-318a295dd105, port_security_enabled=False, project_id=79509bc833494f3598e01347dc55dea9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=984, status=DOWN, tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T09:59:24Z on network 3575213f-e6d7-487b-bb57-318a295dd105#033[00m
Nov 23 04:59:24 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:24.395 2 INFO neutron.agent.securitygroups_rpc [None req-55c9cfb2-f59a-42d6-ace6-61788e22f102 f30cb7ce3bac485ca16e284ef2514162 493833d8fb394637b29c3fb2052aca9c - - default default] Security group member updated ['6a5ca8fc-febe-492b-8ed6-1c2faceb11b7']#033[00m
Nov 23 04:59:24 localhost dnsmasq[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/addn_hosts - 1 addresses
Nov 23 04:59:24 localhost dnsmasq-dhcp[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/host
Nov 23 04:59:24 localhost podman[310643]: 2025-11-23 09:59:24.586935378 +0000 UTC m=+0.064266673 container kill f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:59:24 localhost dnsmasq-dhcp[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/opts
Nov 23 04:59:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:24.858 262721 INFO neutron.agent.dhcp.agent [None req-9c56ed28-f46b-41ae-a58f-d133f6871804 - - - - - -] DHCP configuration for ports {'b83a6b90-00b7-41ef-a6c9-8e30e4359443'} is completed#033[00m
Nov 23 04:59:24 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:24.897 2 INFO neutron.agent.securitygroups_rpc [None req-dfbbbdd6-9764-4926-ad6f-603dbba55323 f30cb7ce3bac485ca16e284ef2514162 493833d8fb394637b29c3fb2052aca9c - - default default] Security group member updated ['6a5ca8fc-febe-492b-8ed6-1c2faceb11b7']#033[00m
Nov 23 04:59:24 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:24.919 2 INFO neutron.agent.securitygroups_rpc [req-3634fbe3-812b-4789-a7d3-12a0d9366017 req-05979dae-61e1-4fc8-b138-901b668995d3 924b04d509744aa59ab90825723761be c1b218a298814f81811d06e4ddeeca2f - - default default] Security group rule updated ['b40df903-b9f3-4a1c-8419-71383dae71f9']#033[00m
Nov 23 04:59:25 localhost nova_compute[281613]: 2025-11-23 09:59:25.277 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:25.538 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:25Z, description=, device_id=e4e2758a-424c-4f93-94ee-a28fa7a5aa9f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cec850>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cec7f0>], id=0be491f7-5eb8-483f-99e5-3094672cfe0e, ip_allocation=immediate, mac_address=fa:16:3e:c2:78:99, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=993, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:25Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:25 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:25 localhost podman[310680]: 2025-11-23 09:59:25.742642806 +0000 UTC m=+0.057050966 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 23 04:59:25 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:25 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:25.780 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:24Z, description=, device_id=89783321-05cb-41ec-bfca-a08a32ddb0e0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c5f940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c5ffd0>], id=b83a6b90-00b7-41ef-a6c9-8e30e4359443, ip_allocation=immediate, mac_address=fa:16:3e:ee:40:cc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:18Z, description=, dns_domain=, id=3575213f-e6d7-487b-bb57-318a295dd105, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-223881741, port_security_enabled=True, project_id=79509bc833494f3598e01347dc55dea9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9565, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=907, status=ACTIVE, subnets=['bf40141c-2861-4fcd-a67c-f42a0d372a72'], tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T09:59:20Z, vlan_transparent=None, network_id=3575213f-e6d7-487b-bb57-318a295dd105, port_security_enabled=False, project_id=79509bc833494f3598e01347dc55dea9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=984, status=DOWN, tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T09:59:24Z on network 3575213f-e6d7-487b-bb57-318a295dd105#033[00m
Nov 23 04:59:25 localhost nova_compute[281613]: 2025-11-23 09:59:25.821 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:25 localhost dnsmasq[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/addn_hosts - 1 addresses
Nov 23 04:59:25 localhost dnsmasq-dhcp[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/host
Nov 23 04:59:25 localhost podman[310718]: 2025-11-23 09:59:25.994588724 +0000 UTC m=+0.061702033 container kill f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:59:25 localhost dnsmasq-dhcp[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/opts
Nov 23 04:59:26 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:26.019 262721 INFO neutron.agent.dhcp.agent [None req-c76961d1-f639-4b28-87c8-9a25bc4ea7fe - - - - - -] DHCP configuration for ports {'0be491f7-5eb8-483f-99e5-3094672cfe0e'} is completed#033[00m
Nov 23 04:59:26 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:26.156 262721 INFO neutron.agent.dhcp.agent [None req-05be9d64-d437-4dcd-a3b2-3d12b8dc188b - - - - - -] DHCP configuration for ports {'b83a6b90-00b7-41ef-a6c9-8e30e4359443'} is completed#033[00m
Nov 23 04:59:26 localhost nova_compute[281613]: 2025-11-23 09:59:26.398 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:27 localhost dnsmasq[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/addn_hosts - 0 addresses
Nov 23 04:59:27 localhost dnsmasq-dhcp[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/host
Nov 23 04:59:27 localhost dnsmasq-dhcp[310625]: read /var/lib/neutron/dhcp/3575213f-e6d7-487b-bb57-318a295dd105/opts
Nov 23 04:59:27 localhost podman[310756]: 2025-11-23 09:59:27.399931315 +0000 UTC m=+0.064561171 container kill f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 04:59:28 localhost ovn_controller[153786]: 2025-11-23T09:59:28Z|00077|binding|INFO|Releasing lport 6828152d-92bd-408b-8d2a-24a9e4ed3cea from this chassis (sb_readonly=0)
Nov 23 04:59:28 localhost nova_compute[281613]: 2025-11-23 09:59:28.267 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:28 localhost ovn_controller[153786]: 2025-11-23T09:59:28Z|00078|binding|INFO|Setting lport 6828152d-92bd-408b-8d2a-24a9e4ed3cea down in Southbound
Nov 23 04:59:28 localhost kernel: device tap6828152d-92 left promiscuous mode
Nov 23 04:59:28 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:28.283 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-3575213f-e6d7-487b-bb57-318a295dd105', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3575213f-e6d7-487b-bb57-318a295dd105', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79509bc833494f3598e01347dc55dea9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d0c7179-faf7-4e4f-ae4a-4d85903602a4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=6828152d-92bd-408b-8d2a-24a9e4ed3cea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:59:28 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:28.285 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 6828152d-92bd-408b-8d2a-24a9e4ed3cea in datapath 3575213f-e6d7-487b-bb57-318a295dd105 unbound from our chassis#033[00m
Nov 23 04:59:28 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:28.287 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3575213f-e6d7-487b-bb57-318a295dd105, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 04:59:28 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:28.288 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[ada3625b-f8a0-4a44-9f43-72809b1cc711]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:59:28 localhost nova_compute[281613]: 2025-11-23 09:59:28.298 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:30 localhost nova_compute[281613]: 2025-11-23 09:59:30.280 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:30 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e103 e103: 6 total, 6 up, 6 in
Nov 23 04:59:30 localhost nova_compute[281613]: 2025-11-23 09:59:30.794 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:30 localhost nova_compute[281613]: 2025-11-23 09:59:30.823 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:31 localhost dnsmasq[310625]: exiting on receipt of SIGTERM
Nov 23 04:59:31 localhost podman[310795]: 2025-11-23 09:59:31.387700972 +0000 UTC m=+0.082791161 container kill f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 04:59:31 localhost systemd[1]: libpod-f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158.scope: Deactivated successfully.
Nov 23 04:59:31 localhost podman[310812]: 2025-11-23 09:59:31.467110169 +0000 UTC m=+0.057235160 container died f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 04:59:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158-userdata-shm.mount: Deactivated successfully.
Nov 23 04:59:31 localhost systemd[1]: var-lib-containers-storage-overlay-1986855ef55539a47081e562617e45a725267d0cd84e674b0e536201d98ff3fd-merged.mount: Deactivated successfully.
Nov 23 04:59:31 localhost podman[310812]: 2025-11-23 09:59:31.568250562 +0000 UTC m=+0.158375553 container remove f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3575213f-e6d7-487b-bb57-318a295dd105, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 04:59:31 localhost systemd[1]: libpod-conmon-f6522596d2d22721e50cdcf893a7ebcb74235f8091d7419b9539b35f8c5e2158.scope: Deactivated successfully.
Nov 23 04:59:31 localhost systemd[1]: run-netns-qdhcp\x2d3575213f\x2de6d7\x2d487b\x2dbb57\x2d318a295dd105.mount: Deactivated successfully.
Nov 23 04:59:31 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:31.782 262721 INFO neutron.agent.dhcp.agent [None req-8ac1840f-ccac-4df4-988c-27ea2eac7ffe - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 04:59:31 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:31.809 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 04:59:31 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:31.991 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 04:59:32 localhost nova_compute[281613]: 2025-11-23 09:59:32.223 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:32 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e104 e104: 6 total, 6 up, 6 in
Nov 23 04:59:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e105 e105: 6 total, 6 up, 6 in
Nov 23 04:59:33 localhost podman[310853]: 2025-11-23 09:59:33.991963365 +0000 UTC m=+0.071257644 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:59:33 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:59:33 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:33 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:34 localhost nova_compute[281613]: 2025-11-23 09:59:34.240 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 04:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 04:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 04:59:34 localhost podman[310893]: 2025-11-23 09:59:34.660483745 +0000 UTC m=+0.103263352 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Nov 23 04:59:34 localhost podman[310893]: 2025-11-23 09:59:34.705607592 +0000 UTC m=+0.148387229 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 23 04:59:34 localhost podman[310894]: 2025-11-23 09:59:34.709748436 +0000 UTC m=+0.149528961 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 04:59:34 localhost podman[310894]: 2025-11-23 09:59:34.726253848 +0000 UTC m=+0.166034373 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true)
Nov 23 04:59:34 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 04:59:34 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 04:59:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:34 localhost podman[310895]: 2025-11-23 09:59:34.827915896 +0000 UTC m=+0.264714539 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 04:59:34 localhost podman[310895]: 2025-11-23 09:59:34.840816679 +0000 UTC m=+0.277615342 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:59:34 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 04:59:34 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:34.977 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:34Z, description=, device_id=8b7ebebb-7573-410b-8139-7228417baae6, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d163a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d16040>], id=28a31b7e-fdfd-482b-8b55-6a855c124c1e, ip_allocation=immediate, mac_address=fa:16:3e:be:f6:83, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1033, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:34Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:34 localhost systemd[1]: tmp-crun.r62aKR.mount: Deactivated successfully.
Nov 23 04:59:35 localhost podman[311008]: 2025-11-23 09:59:35.180516134 +0000 UTC m=+0.045818588 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 04:59:35 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:35 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:35 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:35 localhost nova_compute[281613]: 2025-11-23 09:59:35.283 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:35 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 04:59:35 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:59:35 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e106 e106: 6 total, 6 up, 6 in
Nov 23 04:59:35 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:35.620 262721 INFO neutron.agent.dhcp.agent [None req-0abae6c2-6ac2-43f7-a1a9-e80040c84e92 - - - - - -] DHCP configuration for ports {'28a31b7e-fdfd-482b-8b55-6a855c124c1e'} is completed#033[00m
Nov 23 04:59:35 localhost nova_compute[281613]: 2025-11-23 09:59:35.825 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:35 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:59:35 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:35 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:35 localhost podman[311075]: 2025-11-23 09:59:35.965669391 +0000 UTC m=+0.063739939 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:59:36 localhost nova_compute[281613]: 2025-11-23 09:59:36.147 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e107 e107: 6 total, 6 up, 6 in
Nov 23 04:59:37 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 04:59:37 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:37 localhost podman[311112]: 2025-11-23 09:59:37.678322258 +0000 UTC m=+0.064326494 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:59:37 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:38 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:38.228 2 INFO neutron.agent.securitygroups_rpc [None req-0fc01b46-65ca-4975-99ff-e6e4d0974af8 32512604c08f4fa48e6e985a3f6cd6d1 79509bc833494f3598e01347dc55dea9 - - default default] Security group member updated ['cfab2162-6afe-48a0-9f05-cee7f160244c']#033[00m
Nov 23 04:59:38 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 04:59:39 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:39.545 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:39Z, description=, device_id=d0d1ca2e-bec6-42f7-a399-c878f2411be2, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ccabb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ccad30>], id=bd7c843a-4d88-4cf2-910b-851f2078498b, ip_allocation=immediate, mac_address=fa:16:3e:6c:68:f0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1047, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:39Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:39 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:59:39 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:39 localhost podman[311149]: 2025-11-23 09:59:39.805653136 +0000 UTC m=+0.068697985 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:59:39 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:40 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:40.105 262721 INFO neutron.agent.dhcp.agent [None req-bb253238-ea98-42bb-bd8f-a389f087d1e0 - - - - - -] DHCP configuration for ports {'bd7c843a-4d88-4cf2-910b-851f2078498b'} is completed#033[00m
Nov 23 04:59:40 localhost nova_compute[281613]: 2025-11-23 09:59:40.287 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:40 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e108 e108: 6 total, 6 up, 6 in
Nov 23 04:59:40 localhost nova_compute[281613]: 2025-11-23 09:59:40.827 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:40 localhost nova_compute[281613]: 2025-11-23 09:59:40.835 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:41 localhost podman[240144]: time="2025-11-23T09:59:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 04:59:41 localhost podman[240144]: @ - - [23/Nov/2025:09:59:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 04:59:41 localhost podman[240144]: @ - - [23/Nov/2025:09:59:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19209 "" "Go-http-client/1.1"
Nov 23 04:59:41 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e109 e109: 6 total, 6 up, 6 in
Nov 23 04:59:42 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:42.408 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:42Z, description=, device_id=35addf56-8278-4518-bf70-85a5a42aafc8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790db2f40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790db2c70>], id=62202995-58e7-4fac-9030-5f507ec093c1, ip_allocation=immediate, mac_address=fa:16:3e:d9:96:dc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1075, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:42Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:42 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e110 e110: 6 total, 6 up, 6 in
Nov 23 04:59:42 localhost podman[311188]: 2025-11-23 09:59:42.664165921 +0000 UTC m=+0.059116052 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:59:42 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:42 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:42 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:42 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:42.989 262721 INFO neutron.agent.dhcp.agent [None req-b132a1f7-8505-4773-aab5-de85641bf319 - - - - - -] DHCP configuration for ports {'62202995-58e7-4fac-9030-5f507ec093c1'} is completed#033[00m
Nov 23 04:59:43 localhost nova_compute[281613]: 2025-11-23 09:59:43.301 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:43 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:43.329 2 INFO neutron.agent.securitygroups_rpc [None req-6713e12c-736e-4c48-95b8-a64782f68ffc 32512604c08f4fa48e6e985a3f6cd6d1 79509bc833494f3598e01347dc55dea9 - - default default] Security group member updated ['cfab2162-6afe-48a0-9f05-cee7f160244c']#033[00m
Nov 23 04:59:43 localhost nova_compute[281613]: 2025-11-23 09:59:43.509 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:43 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:43.510 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:59:43 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:43.513 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 04:59:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e111 e111: 6 total, 6 up, 6 in
Nov 23 04:59:43 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:43.864 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:43Z, description=, device_id=41b7469d-e467-4911-9b99-bab5a0773a8f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cf7c40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cf7eb0>], id=a760ef2b-8289-493d-84ca-373afddb0990, ip_allocation=immediate, mac_address=fa:16:3e:ac:d8:2c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1084, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:43Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:44 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 04:59:44 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:44 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:44 localhost podman[311226]: 2025-11-23 09:59:44.114955409 +0000 UTC m=+0.077488136 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:59:44 localhost systemd[1]: tmp-crun.tqA9H0.mount: Deactivated successfully.
Nov 23 04:59:44 localhost nova_compute[281613]: 2025-11-23 09:59:44.439 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:44 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:44.509 262721 INFO neutron.agent.dhcp.agent [None req-f4cda418-5edf-415e-989f-12def3661cf2 - - - - - -] DHCP configuration for ports {'a760ef2b-8289-493d-84ca-373afddb0990'} is completed#033[00m
Nov 23 04:59:44 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:44 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:44 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:44 localhost podman[311263]: 2025-11-23 09:59:44.56468353 +0000 UTC m=+0.070399491 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:59:44 localhost systemd[1]: tmp-crun.NPr8sK.mount: Deactivated successfully.
Nov 23 04:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 04:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 04:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 04:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 04:59:44 localhost podman[311280]: 2025-11-23 09:59:44.706896789 +0000 UTC m=+0.107114368 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 23 04:59:44 localhost podman[311282]: 2025-11-23 09:59:44.757350752 +0000 UTC m=+0.150079235 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 04:59:44 localhost podman[311280]: 2025-11-23 09:59:44.773172796 +0000 UTC m=+0.173390405 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Nov 23 04:59:44 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 04:59:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:44 localhost podman[311279]: 2025-11-23 09:59:44.861847718 +0000 UTC m=+0.264112533 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:59:44 localhost podman[311279]: 2025-11-23 09:59:44.893205627 +0000 UTC m=+0.295470472 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 23 04:59:44 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 04:59:44 localhost podman[311281]: 2025-11-23 09:59:44.911126338 +0000 UTC m=+0.309114325 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:59:44 localhost podman[311282]: 2025-11-23 09:59:44.93306728 +0000 UTC m=+0.325795783 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 04:59:44 localhost podman[311281]: 2025-11-23 09:59:44.944674958 +0000 UTC m=+0.342662935 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 04:59:44 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 04:59:44 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 04:59:45 localhost systemd[1]: tmp-crun.HzFdgT.mount: Deactivated successfully.
Nov 23 04:59:45 localhost nova_compute[281613]: 2025-11-23 09:59:45.292 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:45 localhost nova_compute[281613]: 2025-11-23 09:59:45.571 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:45 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:59:45 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:45 localhost podman[311381]: 2025-11-23 09:59:45.656070013 +0000 UTC m=+0.066700110 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 23 04:59:45 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:45 localhost nova_compute[281613]: 2025-11-23 09:59:45.828 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:46 localhost nova_compute[281613]: 2025-11-23 09:59:46.512 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:47 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:47.120 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:46Z, description=, device_id=f8fc5e41-24a2-4f99-9d43-896ca326d7f5, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c6de50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c6dee0>], id=ccd3f82b-b987-414a-979c-3d00dd473ede, ip_allocation=immediate, mac_address=fa:16:3e:55:ed:04, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1106, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:46Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:47 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:47 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:47 localhost podman[311421]: 2025-11-23 09:59:47.367683502 +0000 UTC m=+0.063657866 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 23 04:59:47 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:47 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 e112: 6 total, 6 up, 6 in
Nov 23 04:59:47 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:47.683 262721 INFO neutron.agent.dhcp.agent [None req-0529b9d3-a9c4-4558-8cac-db2dddfed31d - - - - - -] DHCP configuration for ports {'ccd3f82b-b987-414a-979c-3d00dd473ede'} is completed#033[00m
Nov 23 04:59:49 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:49.568 2 INFO neutron.agent.securitygroups_rpc [None req-061cdcce-87b3-4fab-8b64-8613c3b5bd77 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 04:59:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:50.217 262721 INFO neutron.agent.linux.ip_lib [None req-752e76b1-9c06-4767-a361-2ab08fd915e4 - - - - - -] Device tapb5bf2b13-fe cannot be used as it has no MAC address#033[00m
Nov 23 04:59:50 localhost nova_compute[281613]: 2025-11-23 09:59:50.238 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:50 localhost kernel: device tapb5bf2b13-fe entered promiscuous mode
Nov 23 04:59:50 localhost NetworkManager[5990]: <info>  [1763891990.2490] manager: (tapb5bf2b13-fe): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Nov 23 04:59:50 localhost nova_compute[281613]: 2025-11-23 09:59:50.252 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:50 localhost ovn_controller[153786]: 2025-11-23T09:59:50Z|00079|binding|INFO|Claiming lport b5bf2b13-fea2-4d80-bf73-ce0ead87c324 for this chassis.
Nov 23 04:59:50 localhost ovn_controller[153786]: 2025-11-23T09:59:50Z|00080|binding|INFO|b5bf2b13-fea2-4d80-bf73-ce0ead87c324: Claiming unknown
Nov 23 04:59:50 localhost systemd-udevd[311452]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 04:59:50 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:50.264 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-6f2b85f9-6289-4a2d-8bd5-880663b71bed', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f2b85f9-6289-4a2d-8bd5-880663b71bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79509bc833494f3598e01347dc55dea9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58c64904-a60a-45d4-92a1-b6c0168fc728, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=b5bf2b13-fea2-4d80-bf73-ce0ead87c324) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 04:59:50 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:50.265 159429 INFO neutron.agent.ovn.metadata.agent [-] Port b5bf2b13-fea2-4d80-bf73-ce0ead87c324 in datapath 6f2b85f9-6289-4a2d-8bd5-880663b71bed bound to our chassis#033[00m
Nov 23 04:59:50 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:50.267 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6f2b85f9-6289-4a2d-8bd5-880663b71bed or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 04:59:50 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:50.268 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7ad209-bb8e-4fd7-ace8-2220e23a46f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 04:59:50 localhost journal[229736]: ethtool ioctl error on tapb5bf2b13-fe: No such device
Nov 23 04:59:50 localhost journal[229736]: ethtool ioctl error on tapb5bf2b13-fe: No such device
Nov 23 04:59:50 localhost ovn_controller[153786]: 2025-11-23T09:59:50Z|00081|binding|INFO|Setting lport b5bf2b13-fea2-4d80-bf73-ce0ead87c324 ovn-installed in OVS
Nov 23 04:59:50 localhost ovn_controller[153786]: 2025-11-23T09:59:50Z|00082|binding|INFO|Setting lport b5bf2b13-fea2-4d80-bf73-ce0ead87c324 up in Southbound
Nov 23 04:59:50 localhost journal[229736]: ethtool ioctl error on tapb5bf2b13-fe: No such device
Nov 23 04:59:50 localhost nova_compute[281613]: 2025-11-23 09:59:50.293 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:50 localhost nova_compute[281613]: 2025-11-23 09:59:50.297 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:50 localhost journal[229736]: ethtool ioctl error on tapb5bf2b13-fe: No such device
Nov 23 04:59:50 localhost journal[229736]: ethtool ioctl error on tapb5bf2b13-fe: No such device
Nov 23 04:59:50 localhost journal[229736]: ethtool ioctl error on tapb5bf2b13-fe: No such device
Nov 23 04:59:50 localhost journal[229736]: ethtool ioctl error on tapb5bf2b13-fe: No such device
Nov 23 04:59:50 localhost journal[229736]: ethtool ioctl error on tapb5bf2b13-fe: No such device
Nov 23 04:59:50 localhost nova_compute[281613]: 2025-11-23 09:59:50.329 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:50 localhost nova_compute[281613]: 2025-11-23 09:59:50.361 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:50 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:50.733 2 INFO neutron.agent.securitygroups_rpc [None req-5782673d-3ad2-4525-b0b7-33b67eb33956 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 04:59:50 localhost nova_compute[281613]: 2025-11-23 09:59:50.830 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:51 localhost podman[311523]: 
Nov 23 04:59:51 localhost podman[311523]: 2025-11-23 09:59:51.266195312 +0000 UTC m=+0.091945162 container create a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 04:59:51 localhost podman[311523]: 2025-11-23 09:59:51.221594489 +0000 UTC m=+0.047344369 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 04:59:51 localhost systemd[1]: Started libpod-conmon-a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c.scope.
Nov 23 04:59:51 localhost systemd[1]: Started libcrun container.
Nov 23 04:59:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2f6c1decc6b1c9c59582ef8a72c2c06716c87db97cfa1fbbd97f9b7f5673c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 04:59:51 localhost podman[311523]: 2025-11-23 09:59:51.358587935 +0000 UTC m=+0.184337785 container init a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:59:51 localhost podman[311523]: 2025-11-23 09:59:51.375620813 +0000 UTC m=+0.201370653 container start a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 04:59:51 localhost dnsmasq[311550]: started, version 2.85 cachesize 150
Nov 23 04:59:51 localhost dnsmasq[311550]: DNS service limited to local subnets
Nov 23 04:59:51 localhost dnsmasq[311550]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 04:59:51 localhost dnsmasq[311550]: warning: no upstream servers configured
Nov 23 04:59:51 localhost dnsmasq-dhcp[311550]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 04:59:51 localhost dnsmasq[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/addn_hosts - 0 addresses
Nov 23 04:59:51 localhost dnsmasq-dhcp[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/host
Nov 23 04:59:51 localhost dnsmasq-dhcp[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/opts
Nov 23 04:59:51 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 04:59:51 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:51 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:51 localhost podman[311559]: 2025-11-23 09:59:51.50358429 +0000 UTC m=+0.051154633 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 04:59:51 localhost ovn_metadata_agent[159423]: 2025-11-23 09:59:51.517 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 04:59:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:51.561 262721 INFO neutron.agent.dhcp.agent [None req-60653705-4494-4799-8781-f5dcde63bb5c - - - - - -] DHCP configuration for ports {'47bee772-1606-4272-812d-44594265b83d'} is completed#033[00m
Nov 23 04:59:52 localhost systemd[1]: tmp-crun.fuTj3I.mount: Deactivated successfully.
Nov 23 04:59:52 localhost openstack_network_exporter[242118]: ERROR   09:59:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 04:59:52 localhost openstack_network_exporter[242118]: ERROR   09:59:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:59:52 localhost openstack_network_exporter[242118]: ERROR   09:59:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 04:59:52 localhost openstack_network_exporter[242118]: ERROR   09:59:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 04:59:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:59:52 localhost openstack_network_exporter[242118]: ERROR   09:59:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 04:59:52 localhost openstack_network_exporter[242118]: 
Nov 23 04:59:52 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:52.631 2 INFO neutron.agent.securitygroups_rpc [None req-633bd2af-73f5-42be-a8e1-16475aa1b324 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 04:59:53 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:53.169 2 INFO neutron.agent.securitygroups_rpc [None req-33fb7598-4629-4242-b064-9d05bdc1e723 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 04:59:53 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:53.197 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:52Z, description=, device_id=51630212-39f7-419f-a981-d27b90506037, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ccabb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790dc8a30>], id=74e50cff-09b0-4162-9b7d-587f8706251d, ip_allocation=immediate, mac_address=fa:16:3e:ba:83:f1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1144, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:52Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:53 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:53 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:53 localhost podman[311597]: 2025-11-23 09:59:53.481635895 +0000 UTC m=+0.063524413 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:59:53 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:53 localhost nova_compute[281613]: 2025-11-23 09:59:53.535 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:53 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:53.723 262721 INFO neutron.agent.dhcp.agent [None req-d1c15899-0e68-420a-9231-619efe973028 - - - - - -] DHCP configuration for ports {'74e50cff-09b0-4162-9b7d-587f8706251d'} is completed#033[00m
Nov 23 04:59:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 04:59:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:54.983 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:54Z, description=, device_id=a1f5ba9b-9711-449f-bd91-f7af48c6ed89, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c3d310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c3d1f0>], id=d473476d-6ff5-4468-8460-53cc24497c0b, ip_allocation=immediate, mac_address=fa:16:3e:20:4a:15, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:47Z, description=, dns_domain=, id=6f2b85f9-6289-4a2d-8bd5-880663b71bed, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1209936747, port_security_enabled=True, project_id=79509bc833494f3598e01347dc55dea9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37935, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1109, status=ACTIVE, subnets=['5c090195-5309-4e1a-820a-59cc77e5ec40'], tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T09:59:48Z, vlan_transparent=None, network_id=6f2b85f9-6289-4a2d-8bd5-880663b71bed, port_security_enabled=False, project_id=79509bc833494f3598e01347dc55dea9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1148, status=DOWN, tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T09:59:54Z on network 6f2b85f9-6289-4a2d-8bd5-880663b71bed#033[00m
Nov 23 04:59:55 localhost systemd[1]: tmp-crun.GY3oGy.mount: Deactivated successfully.
Nov 23 04:59:55 localhost dnsmasq[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/addn_hosts - 1 addresses
Nov 23 04:59:55 localhost dnsmasq-dhcp[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/host
Nov 23 04:59:55 localhost dnsmasq-dhcp[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/opts
Nov 23 04:59:55 localhost podman[311634]: 2025-11-23 09:59:55.230910037 +0000 UTC m=+0.099024606 container kill a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 04:59:55 localhost nova_compute[281613]: 2025-11-23 09:59:55.301 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:55 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:55.419 2 INFO neutron.agent.securitygroups_rpc [None req-4a1b8c07-dec2-4711-92de-c07233183ccc 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 04:59:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:55.496 262721 INFO neutron.agent.dhcp.agent [None req-3baf4ffd-8c23-4a46-b0df-3ecded6f8d94 - - - - - -] DHCP configuration for ports {'d473476d-6ff5-4468-8460-53cc24497c0b'} is completed#033[00m
Nov 23 04:59:55 localhost nova_compute[281613]: 2025-11-23 09:59:55.832 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:56 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:56.300 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:55Z, description=, device_id=f7264c3d-d91d-4440-8d2f-8f70b552fb05, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cd39a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cd3f70>], id=ed7107ef-157d-42b0-8e8a-5fa50757ae76, ip_allocation=immediate, mac_address=fa:16:3e:4e:93:0c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1153, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:55Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 04:59:56 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 04:59:56 localhost podman[311673]: 2025-11-23 09:59:56.514114092 +0000 UTC m=+0.057749705 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 04:59:56 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:56 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:56 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:56.764 262721 INFO neutron.agent.dhcp.agent [None req-3a7e5f7f-6b34-4ee0-a81a-74ebfc62a0f6 - - - - - -] DHCP configuration for ports {'ed7107ef-157d-42b0-8e8a-5fa50757ae76'} is completed#033[00m
Nov 23 04:59:57 localhost podman[311713]: 2025-11-23 09:59:57.331709878 +0000 UTC m=+0.059745109 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 04:59:57 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 04:59:57 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 04:59:57 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 04:59:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:57.368 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:54Z, description=, device_id=a1f5ba9b-9711-449f-bd91-f7af48c6ed89, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cf4070>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cf4850>], id=d473476d-6ff5-4468-8460-53cc24497c0b, ip_allocation=immediate, mac_address=fa:16:3e:20:4a:15, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T09:59:47Z, description=, dns_domain=, id=6f2b85f9-6289-4a2d-8bd5-880663b71bed, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1209936747, port_security_enabled=True, project_id=79509bc833494f3598e01347dc55dea9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=37935, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1109, status=ACTIVE, subnets=['5c090195-5309-4e1a-820a-59cc77e5ec40'], tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T09:59:48Z, vlan_transparent=None, network_id=6f2b85f9-6289-4a2d-8bd5-880663b71bed, port_security_enabled=False, project_id=79509bc833494f3598e01347dc55dea9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1148, status=DOWN, tags=[], tenant_id=79509bc833494f3598e01347dc55dea9, updated_at=2025-11-23T09:59:54Z on network 6f2b85f9-6289-4a2d-8bd5-880663b71bed#033[00m
Nov 23 04:59:57 localhost dnsmasq[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/addn_hosts - 1 addresses
Nov 23 04:59:57 localhost dnsmasq-dhcp[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/host
Nov 23 04:59:57 localhost dnsmasq-dhcp[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/opts
Nov 23 04:59:57 localhost podman[311750]: 2025-11-23 09:59:57.583931273 +0000 UTC m=+0.061290921 container kill a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 04:59:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 09:59:57.837 262721 INFO neutron.agent.dhcp.agent [None req-162091a1-8cd9-4c01-a441-537ecbc67a90 - - - - - -] DHCP configuration for ports {'d473476d-6ff5-4468-8460-53cc24497c0b'} is completed#033[00m
Nov 23 04:59:57 localhost neutron_sriov_agent[255613]: 2025-11-23 09:59:57.978 2 INFO neutron.agent.securitygroups_rpc [None req-d239cbef-5e7b-4e18-8195-2f02667a16df 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 04:59:58 localhost nova_compute[281613]: 2025-11-23 09:59:58.118 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 04:59:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:00 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:00.267 2 INFO neutron.agent.securitygroups_rpc [None req-07e0e453-973e-4bb8-a740-58232b03adf2 e78ebdfe612745638abad47217c77d70 a40d996843764f32a4281f01703f5aee - - default default] Security group member updated ['e81e3952-d0ad-411e-a904-c021d2ed129c']#033[00m
Nov 23 05:00:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:00:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2092452904' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:00:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:00:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2092452904' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:00:00 localhost nova_compute[281613]: 2025-11-23 10:00:00.304 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:00 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:00.363 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T09:59:59Z, description=, device_id=3f27bbbe-0dc2-4917-9ca5-62cea3cc795f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa79161f460>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa79161f820>], id=1a9194ca-6263-4749-8640-9508dedeea08, ip_allocation=immediate, mac_address=fa:16:3e:d2:44:a1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1180, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T09:59:59Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:00 localhost ceph-mon[302802]: overall HEALTH_OK
Nov 23 05:00:00 localhost dnsmasq[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/addn_hosts - 0 addresses
Nov 23 05:00:00 localhost podman[311790]: 2025-11-23 10:00:00.527226482 +0000 UTC m=+0.076382646 container kill a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 05:00:00 localhost dnsmasq-dhcp[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/host
Nov 23 05:00:00 localhost dnsmasq-dhcp[311550]: read /var/lib/neutron/dhcp/6f2b85f9-6289-4a2d-8bd5-880663b71bed/opts
Nov 23 05:00:00 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:00 localhost podman[311816]: 2025-11-23 10:00:00.671287222 +0000 UTC m=+0.069077056 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:00:00 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:00 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:00 localhost nova_compute[281613]: 2025-11-23 10:00:00.707 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:00 localhost kernel: device tapb5bf2b13-fe left promiscuous mode
Nov 23 05:00:00 localhost ovn_controller[153786]: 2025-11-23T10:00:00Z|00083|binding|INFO|Releasing lport b5bf2b13-fea2-4d80-bf73-ce0ead87c324 from this chassis (sb_readonly=0)
Nov 23 05:00:00 localhost ovn_controller[153786]: 2025-11-23T10:00:00Z|00084|binding|INFO|Setting lport b5bf2b13-fea2-4d80-bf73-ce0ead87c324 down in Southbound
Nov 23 05:00:00 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:00.718 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-6f2b85f9-6289-4a2d-8bd5-880663b71bed', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6f2b85f9-6289-4a2d-8bd5-880663b71bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79509bc833494f3598e01347dc55dea9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58c64904-a60a-45d4-92a1-b6c0168fc728, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=b5bf2b13-fea2-4d80-bf73-ce0ead87c324) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:00:00 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:00.720 159429 INFO neutron.agent.ovn.metadata.agent [-] Port b5bf2b13-fea2-4d80-bf73-ce0ead87c324 in datapath 6f2b85f9-6289-4a2d-8bd5-880663b71bed unbound from our chassis#033[00m
Nov 23 05:00:00 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:00.722 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6f2b85f9-6289-4a2d-8bd5-880663b71bed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:00:00 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:00.724 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec21353-332c-418e-8b88-1d56bfcb8dfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:00:00 localhost nova_compute[281613]: 2025-11-23 10:00:00.764 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:00 localhost nova_compute[281613]: 2025-11-23 10:00:00.834 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:00 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:00.946 262721 INFO neutron.agent.dhcp.agent [None req-14262173-d694-4435-b1aa-489684a25d34 - - - - - -] DHCP configuration for ports {'1a9194ca-6263-4749-8640-9508dedeea08'} is completed#033[00m
Nov 23 05:00:01 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:01.659 2 INFO neutron.agent.securitygroups_rpc [None req-68228748-d1d1-4e93-958e-faf2dd4d659b 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:02 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:02.097 2 INFO neutron.agent.securitygroups_rpc [None req-ba51cecc-c6ac-46c9-99a7-bae53245da97 e78ebdfe612745638abad47217c77d70 a40d996843764f32a4281f01703f5aee - - default default] Security group member updated ['e81e3952-d0ad-411e-a904-c021d2ed129c']#033[00m
Nov 23 05:00:02 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:00:02 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:02 localhost podman[311860]: 2025-11-23 10:00:02.53241981 +0000 UTC m=+0.044143001 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:00:02 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:02 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:02.737 2 INFO neutron.agent.securitygroups_rpc [None req-be23f1ad-34a8-40d2-b634-8fb333cbd4a1 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:02 localhost nova_compute[281613]: 2025-11-23 10:00:02.743 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:03 localhost dnsmasq[311550]: exiting on receipt of SIGTERM
Nov 23 05:00:03 localhost podman[311899]: 2025-11-23 10:00:03.020461692 +0000 UTC m=+0.063538184 container kill a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 05:00:03 localhost systemd[1]: libpod-a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c.scope: Deactivated successfully.
Nov 23 05:00:03 localhost podman[311913]: 2025-11-23 10:00:03.088316942 +0000 UTC m=+0.053325313 container died a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:00:03 localhost podman[311913]: 2025-11-23 10:00:03.126810388 +0000 UTC m=+0.091818749 container cleanup a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 05:00:03 localhost systemd[1]: libpod-conmon-a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c.scope: Deactivated successfully.
Nov 23 05:00:03 localhost podman[311915]: 2025-11-23 10:00:03.161795456 +0000 UTC m=+0.121513112 container remove a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6f2b85f9-6289-4a2d-8bd5-880663b71bed, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:00:03 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:03.205 262721 INFO neutron.agent.dhcp.agent [None req-0f2479fc-eb77-49a9-bb1c-3fb87d364b14 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:00:03 localhost systemd[1]: var-lib-containers-storage-overlay-6b2f6c1decc6b1c9c59582ef8a72c2c06716c87db97cfa1fbbd97f9b7f5673c3-merged.mount: Deactivated successfully.
Nov 23 05:00:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a067bb1a71672782480addc6d246fe876bc7eb96cb2b819a13d9dbe84617835c-userdata-shm.mount: Deactivated successfully.
Nov 23 05:00:03 localhost systemd[1]: run-netns-qdhcp\x2d6f2b85f9\x2d6289\x2d4a2d\x2d8bd5\x2d880663b71bed.mount: Deactivated successfully.
Nov 23 05:00:03 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:03.709 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:00:04 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:04.657 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:03Z, description=, device_id=6c959180-536e-4cbb-a6e5-3082c340988b, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cb1400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cb11c0>], id=345bf296-ed0b-4571-b7ca-c6ffea72a0ad, ip_allocation=immediate, mac_address=fa:16:3e:3f:7d:97, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1198, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:03Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:05 localhost systemd[1]: tmp-crun.bfa2sC.mount: Deactivated successfully.
Nov 23 05:00:05 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:05 localhost podman[311958]: 2025-11-23 10:00:05.005175128 +0000 UTC m=+0.067307386 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:00:05 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:05 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:00:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:00:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:00:05 localhost podman[311971]: 2025-11-23 10:00:05.138154975 +0000 UTC m=+0.101331830 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, config_id=edpm)
Nov 23 05:00:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:05.160 262721 INFO neutron.agent.dhcp.agent [None req-66abef4a-8c69-470e-bc0a-75d54bdfa2cd - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:04Z, description=, device_id=ea5d3627-211a-4c5e-88c3-e240776a6aef, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cd39d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cd3bb0>], id=d9f2f4a2-06a3-4d8b-a7cc-c7a529063934, ip_allocation=immediate, mac_address=fa:16:3e:eb:fe:6b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1201, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:04Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:05 localhost podman[311971]: 2025-11-23 10:00:05.181266387 +0000 UTC m=+0.144443252 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 23 05:00:05 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:00:05 localhost podman[311973]: 2025-11-23 10:00:05.199675561 +0000 UTC m=+0.157154010 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:00:05 localhost podman[311973]: 2025-11-23 10:00:05.213996414 +0000 UTC m=+0.171474913 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:00:05 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:00:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:05.249 262721 INFO neutron.agent.dhcp.agent [None req-15589a9f-693e-4826-b1a3-612948aea8e1 - - - - - -] DHCP configuration for ports {'345bf296-ed0b-4571-b7ca-c6ffea72a0ad'} is completed#033[00m
Nov 23 05:00:05 localhost podman[311972]: 2025-11-23 10:00:05.288511917 +0000 UTC m=+0.252188286 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:00:05 localhost podman[311972]: 2025-11-23 10:00:05.298261165 +0000 UTC m=+0.261937484 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:00:05 localhost nova_compute[281613]: 2025-11-23 10:00:05.307 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:05 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:00:05 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 05:00:05 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:05 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:05 localhost podman[312055]: 2025-11-23 10:00:05.442312624 +0000 UTC m=+0.066049562 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:00:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:05.786 262721 INFO neutron.agent.dhcp.agent [None req-9b6dd802-bbdb-4d69-aa73-10032ce7c1c3 - - - - - -] DHCP configuration for ports {'d9f2f4a2-06a3-4d8b-a7cc-c7a529063934'} is completed#033[00m
Nov 23 05:00:05 localhost nova_compute[281613]: 2025-11-23 10:00:05.836 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:05 localhost systemd[1]: tmp-crun.YNcSi9.mount: Deactivated successfully.
Nov 23 05:00:06 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:06.725 2 INFO neutron.agent.securitygroups_rpc [None req-43a1a170-ce3b-4775-9849-731eb3e4f92f 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:06 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:06 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:06 localhost systemd[1]: tmp-crun.0IvI6F.mount: Deactivated successfully.
Nov 23 05:00:06 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:06 localhost podman[312094]: 2025-11-23 10:00:06.97473722 +0000 UTC m=+0.069165168 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:07 localhost nova_compute[281613]: 2025-11-23 10:00:07.219 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:07 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:00:07 localhost podman[312131]: 2025-11-23 10:00:07.646635242 +0000 UTC m=+0.058356781 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 05:00:07 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:07 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:08 localhost nova_compute[281613]: 2025-11-23 10:00:08.065 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:08 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:08.396 2 INFO neutron.agent.securitygroups_rpc [None req-c2842295-e0a9-464d-87c0-40db2e234814 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:09.267 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:00:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:09.268 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:00:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:09.268 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:00:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:10 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:10.198 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:09Z, description=, device_id=a9f459f9-d00d-413f-990d-df160ae9527f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d16040>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d41460>], id=3ad470c1-9054-435f-bd59-824aa88bcc15, ip_allocation=immediate, mac_address=fa:16:3e:7b:9f:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1217, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:09Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:10 localhost nova_compute[281613]: 2025-11-23 10:00:10.311 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:10 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:10 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:10 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:10 localhost podman[312168]: 2025-11-23 10:00:10.414719828 +0000 UTC m=+0.063758209 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:00:10 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:10.672 262721 INFO neutron.agent.dhcp.agent [None req-82812918-6f4a-436e-af5d-9f03230a2f4f - - - - - -] DHCP configuration for ports {'3ad470c1-9054-435f-bd59-824aa88bcc15'} is completed#033[00m
Nov 23 05:00:10 localhost nova_compute[281613]: 2025-11-23 10:00:10.838 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:11 localhost podman[240144]: time="2025-11-23T10:00:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:00:11 localhost podman[240144]: @ - - [23/Nov/2025:10:00:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:00:11 localhost podman[240144]: @ - - [23/Nov/2025:10:00:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19205 "" "Go-http-client/1.1"
Nov 23 05:00:11 localhost nova_compute[281613]: 2025-11-23 10:00:11.894 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:12 localhost nova_compute[281613]: 2025-11-23 10:00:12.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:12 localhost nova_compute[281613]: 2025-11-23 10:00:12.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:12 localhost nova_compute[281613]: 2025-11-23 10:00:12.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:12 localhost nova_compute[281613]: 2025-11-23 10:00:12.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:00:12 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:12.043 2 INFO neutron.agent.securitygroups_rpc [None req-311a0dd8-e6e7-491c-ad02-2876db83aabe 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:13 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:13.099 2 INFO neutron.agent.securitygroups_rpc [None req-e5debdc8-a6b2-4239-b0b7-2d251ec66c55 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:13 localhost systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 23 05:00:14 localhost nova_compute[281613]: 2025-11-23 10:00:14.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:14 localhost nova_compute[281613]: 2025-11-23 10:00:14.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:00:14 localhost nova_compute[281613]: 2025-11-23 10:00:14.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:00:14 localhost nova_compute[281613]: 2025-11-23 10:00:14.038 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:00:14 localhost nova_compute[281613]: 2025-11-23 10:00:14.038 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:14 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:00:14 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:14 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:14 localhost podman[312208]: 2025-11-23 10:00:14.699414076 +0000 UTC m=+0.056479430 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 05:00:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:00:15 localhost podman[312229]: 2025-11-23 10:00:15.192316211 +0000 UTC m=+0.093581577 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 05:00:15 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:15.224 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:15Z, description=, device_id=ba499f83-35e9-40c8-9054-0664b13a4d88, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ccab80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cca7c0>], id=3daa4d60-09ec-43f8-a71e-9b84f4095818, ip_allocation=immediate, mac_address=fa:16:3e:a0:dc:2e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1246, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:15Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:15 localhost systemd[1]: tmp-crun.iBvvV1.mount: Deactivated successfully.
Nov 23 05:00:15 localhost podman[312236]: 2025-11-23 10:00:15.240328397 +0000 UTC m=+0.134590671 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 23 05:00:15 localhost podman[312229]: 2025-11-23 10:00:15.25757236 +0000 UTC m=+0.158837776 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 05:00:15 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:00:15 localhost podman[312236]: 2025-11-23 10:00:15.304724183 +0000 UTC m=+0.198986487 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:00:15 localhost nova_compute[281613]: 2025-11-23 10:00:15.315 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:15 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:00:15 localhost podman[312228]: 2025-11-23 10:00:15.356764379 +0000 UTC m=+0.261413218 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 23 05:00:15 localhost podman[312228]: 2025-11-23 10:00:15.392005636 +0000 UTC m=+0.296654495 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:00:15 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:00:15 localhost podman[312230]: 2025-11-23 10:00:15.447781615 +0000 UTC m=+0.344781684 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:00:15 localhost podman[312230]: 2025-11-23 10:00:15.4600124 +0000 UTC m=+0.357012449 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:00:15 localhost podman[312316]: 2025-11-23 10:00:15.480847002 +0000 UTC m=+0.065061026 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:00:15 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:15 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:15 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:15 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:00:15 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:15.761 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:15Z, description=, device_id=166a05be-8833-4c50-adc8-43ced708fef2, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c3d7c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ca58e0>], id=d1710032-ffb5-4823-9058-fdabe1ed8231, ip_allocation=immediate, mac_address=fa:16:3e:11:2a:ec, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1247, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:15Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:15 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:15.814 262721 INFO neutron.agent.dhcp.agent [None req-5a7e4c13-b5be-44b5-9f34-95117bc06c2d - - - - - -] DHCP configuration for ports {'3daa4d60-09ec-43f8-a71e-9b84f4095818'} is completed#033[00m
Nov 23 05:00:15 localhost nova_compute[281613]: 2025-11-23 10:00:15.841 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:16 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 05:00:16 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:16 localhost systemd[1]: tmp-crun.tlMzAv.mount: Deactivated successfully.
Nov 23 05:00:16 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:16 localhost podman[312365]: 2025-11-23 10:00:16.011192043 +0000 UTC m=+0.067868472 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.036 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.037 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.038 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.038 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.039 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:00:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:16.373 262721 INFO neutron.agent.dhcp.agent [None req-4ad92aae-efd5-4bf9-9604-729bca3a853f - - - - - -] DHCP configuration for ports {'d1710032-ffb5-4823-9058-fdabe1ed8231'} is completed#033[00m
Nov 23 05:00:16 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:00:16 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2257090387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.494 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.709 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.710 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11702MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.711 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.711 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.796 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.796 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:00:16 localhost nova_compute[281613]: 2025-11-23 10:00:16.824 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:00:17 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:17.151 2 INFO neutron.agent.securitygroups_rpc [None req-eec5559f-af9d-40c0-b1ad-486b576202fe 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:00:17 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4073821418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:00:17 localhost nova_compute[281613]: 2025-11-23 10:00:17.215 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:00:17 localhost nova_compute[281613]: 2025-11-23 10:00:17.222 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:00:17 localhost nova_compute[281613]: 2025-11-23 10:00:17.277 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:00:17 localhost nova_compute[281613]: 2025-11-23 10:00:17.280 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:00:17 localhost nova_compute[281613]: 2025-11-23 10:00:17.280 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:00:17 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:17 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:17 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:17 localhost podman[312447]: 2025-11-23 10:00:17.556235714 +0000 UTC m=+0.046484815 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:00:17 localhost systemd[1]: tmp-crun.8aJUQ6.mount: Deactivated successfully.
Nov 23 05:00:18 localhost nova_compute[281613]: 2025-11-23 10:00:18.281 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:00:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:20 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:20.050 2 INFO neutron.agent.securitygroups_rpc [None req-0b580d26-bcb7-428b-9b48-acb40f408d24 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:20 localhost nova_compute[281613]: 2025-11-23 10:00:20.317 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:20 localhost nova_compute[281613]: 2025-11-23 10:00:20.843 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:21 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:00:21 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:21 localhost podman[312485]: 2025-11-23 10:00:21.276188778 +0000 UTC m=+0.052883621 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:00:21 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:21 localhost nova_compute[281613]: 2025-11-23 10:00:21.519 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:21 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:00:21 localhost podman[312521]: 2025-11-23 10:00:21.622048381 +0000 UTC m=+0.056892831 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:21 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:21 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:21 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:00:21 localhost podman[312559]: 2025-11-23 10:00:21.999042687 +0000 UTC m=+0.057851476 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:00:22 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:22 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:22 localhost nova_compute[281613]: 2025-11-23 10:00:22.095 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:22 localhost openstack_network_exporter[242118]: ERROR   10:00:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:00:22 localhost openstack_network_exporter[242118]: ERROR   10:00:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:00:22 localhost openstack_network_exporter[242118]: ERROR   10:00:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:00:22 localhost openstack_network_exporter[242118]: ERROR   10:00:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:00:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:00:22 localhost openstack_network_exporter[242118]: ERROR   10:00:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:00:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:00:22 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:22.643 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:22Z, description=, device_id=417e026e-ed80-4900-a44a-4c0993cb7cf7, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cd2f70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cd2280>], id=cea706fd-4e62-4fae-a484-95ccc13b18e6, ip_allocation=immediate, mac_address=fa:16:3e:dc:d3:6c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1274, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:22Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:22 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:00:22 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:22 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:22 localhost podman[312598]: 2025-11-23 10:00:22.934674241 +0000 UTC m=+0.062646239 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 05:00:23 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:23.208 262721 INFO neutron.agent.dhcp.agent [None req-4734ed68-bb47-4382-b145-c48e2ceb6fd4 - - - - - -] DHCP configuration for ports {'cea706fd-4e62-4fae-a484-95ccc13b18e6'} is completed#033[00m
Nov 23 05:00:24 localhost nova_compute[281613]: 2025-11-23 10:00:24.077 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:25.304 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:25Z, description=, device_id=a0d218a6-4707-46d0-991f-92b361e7a103, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d38a00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d38700>], id=4d1200ba-b48f-4257-bca2-3269d2e210fa, ip_allocation=immediate, mac_address=fa:16:3e:07:a8:61, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1288, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:25Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:25 localhost nova_compute[281613]: 2025-11-23 10:00:25.371 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:25 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:00:25 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:25 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:25 localhost podman[312635]: 2025-11-23 10:00:25.532153238 +0000 UTC m=+0.060963161 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:00:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:25.791 262721 INFO neutron.agent.dhcp.agent [None req-9e75cd32-63db-4047-afbc-3995a1bcbaa8 - - - - - -] DHCP configuration for ports {'4d1200ba-b48f-4257-bca2-3269d2e210fa'} is completed#033[00m
Nov 23 05:00:25 localhost nova_compute[281613]: 2025-11-23 10:00:25.845 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:26 localhost nova_compute[281613]: 2025-11-23 10:00:26.642 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:28 localhost systemd[1]: tmp-crun.UzjrSv.mount: Deactivated successfully.
Nov 23 05:00:28 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:00:28 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:28 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:28 localhost podman[312674]: 2025-11-23 10:00:28.797628572 +0000 UTC m=+0.068313684 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:00:28 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:28.990 2 INFO neutron.agent.securitygroups_rpc [None req-7e066fee-86c7-4254-86ea-1e6408304d23 e02e3a6650d04cc49a5785719f724cce cb02ad4f92a44de895fb1e96459d4a8d - - default default] Security group member updated ['19c917c4-9016-402d-a3b3-6d8ae59f7b74']#033[00m
Nov 23 05:00:29 localhost nova_compute[281613]: 2025-11-23 10:00:29.234 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:29 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:29.791 2 INFO neutron.agent.securitygroups_rpc [None req-28d507eb-defd-436d-9561-198db6b19aa5 e02e3a6650d04cc49a5785719f724cce cb02ad4f92a44de895fb1e96459d4a8d - - default default] Security group member updated ['19c917c4-9016-402d-a3b3-6d8ae59f7b74']#033[00m
Nov 23 05:00:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:30 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:30.072 262721 INFO neutron.agent.linux.ip_lib [None req-bc62ddf8-d8c6-4111-bd12-1606952378fe - - - - - -] Device tape2ea765d-13 cannot be used as it has no MAC address#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.093 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost kernel: device tape2ea765d-13 entered promiscuous mode
Nov 23 05:00:30 localhost NetworkManager[5990]: <info>  [1763892030.1010] manager: (tape2ea765d-13): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.102 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost ovn_controller[153786]: 2025-11-23T10:00:30Z|00085|binding|INFO|Claiming lport e2ea765d-13a4-4bd1-9a7d-413908230e1f for this chassis.
Nov 23 05:00:30 localhost ovn_controller[153786]: 2025-11-23T10:00:30Z|00086|binding|INFO|e2ea765d-13a4-4bd1-9a7d-413908230e1f: Claiming unknown
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.107 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost systemd-udevd[312706]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:00:30 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:30.124 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-22104048-78a3-4c68-b7b5-e7559026cd59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22104048-78a3-4c68-b7b5-e7559026cd59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bdcbc1cb959a4f9bbeea8ec1d869ac05', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bdf2c07-5eb6-4dfa-92ae-408208a6f5cd, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=e2ea765d-13a4-4bd1-9a7d-413908230e1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:00:30 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:30.126 159429 INFO neutron.agent.ovn.metadata.agent [-] Port e2ea765d-13a4-4bd1-9a7d-413908230e1f in datapath 22104048-78a3-4c68-b7b5-e7559026cd59 bound to our chassis#033[00m
Nov 23 05:00:30 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:30.127 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 22104048-78a3-4c68-b7b5-e7559026cd59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:00:30 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:30.128 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[be892dbf-1e39-4eb1-9f80-3312fc5b3055]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:00:30 localhost journal[229736]: ethtool ioctl error on tape2ea765d-13: No such device
Nov 23 05:00:30 localhost ovn_controller[153786]: 2025-11-23T10:00:30Z|00087|binding|INFO|Setting lport e2ea765d-13a4-4bd1-9a7d-413908230e1f ovn-installed in OVS
Nov 23 05:00:30 localhost ovn_controller[153786]: 2025-11-23T10:00:30Z|00088|binding|INFO|Setting lport e2ea765d-13a4-4bd1-9a7d-413908230e1f up in Southbound
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.137 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost journal[229736]: ethtool ioctl error on tape2ea765d-13: No such device
Nov 23 05:00:30 localhost journal[229736]: ethtool ioctl error on tape2ea765d-13: No such device
Nov 23 05:00:30 localhost journal[229736]: ethtool ioctl error on tape2ea765d-13: No such device
Nov 23 05:00:30 localhost journal[229736]: ethtool ioctl error on tape2ea765d-13: No such device
Nov 23 05:00:30 localhost journal[229736]: ethtool ioctl error on tape2ea765d-13: No such device
Nov 23 05:00:30 localhost journal[229736]: ethtool ioctl error on tape2ea765d-13: No such device
Nov 23 05:00:30 localhost journal[229736]: ethtool ioctl error on tape2ea765d-13: No such device
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.175 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.211 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.411 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:00:30 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:30 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:30 localhost podman[312753]: 2025-11-23 10:00:30.418512463 +0000 UTC m=+0.079629044 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:30 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:30.563 262721 INFO neutron.agent.linux.ip_lib [None req-8cf6ad79-e9f1-4c4e-8f14-0f704170e60e - - - - - -] Device tap9b95875d-73 cannot be used as it has no MAC address#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.612 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost kernel: device tap9b95875d-73 entered promiscuous mode
Nov 23 05:00:30 localhost ovn_controller[153786]: 2025-11-23T10:00:30Z|00089|binding|INFO|Claiming lport 9b95875d-73fc-4285-bf07-eeda3bcab60d for this chassis.
Nov 23 05:00:30 localhost ovn_controller[153786]: 2025-11-23T10:00:30Z|00090|binding|INFO|9b95875d-73fc-4285-bf07-eeda3bcab60d: Claiming unknown
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.620 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost NetworkManager[5990]: <info>  [1763892030.6210] manager: (tap9b95875d-73): new Generic device (/org/freedesktop/NetworkManager/Devices/23)
Nov 23 05:00:30 localhost systemd-udevd[312708]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:00:30 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:30.633 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-966bcc71-4187-4728-a19e-82678e97db57', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-966bcc71-4187-4728-a19e-82678e97db57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5622ef76e6a4d328c523c81acbdbe75', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42d0a21c-ba54-4e82-a5f9-23689c8629da, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=9b95875d-73fc-4285-bf07-eeda3bcab60d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:00:30 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:30.634 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 9b95875d-73fc-4285-bf07-eeda3bcab60d in datapath 966bcc71-4187-4728-a19e-82678e97db57 bound to our chassis#033[00m
Nov 23 05:00:30 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:30.636 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 966bcc71-4187-4728-a19e-82678e97db57 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:00:30 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:30.637 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[117ca2eb-92b6-49e2-815c-f505877d715f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:00:30 localhost ovn_controller[153786]: 2025-11-23T10:00:30Z|00091|binding|INFO|Setting lport 9b95875d-73fc-4285-bf07-eeda3bcab60d ovn-installed in OVS
Nov 23 05:00:30 localhost ovn_controller[153786]: 2025-11-23T10:00:30Z|00092|binding|INFO|Setting lport 9b95875d-73fc-4285-bf07-eeda3bcab60d up in Southbound
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.639 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.642 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.677 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.739 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.771 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.846 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost nova_compute[281613]: 2025-11-23 10:00:30.850 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:30 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:30.858 2 INFO neutron.agent.securitygroups_rpc [None req-d4d902e4-381c-4584-8a85-837df381e7eb e02e3a6650d04cc49a5785719f724cce cb02ad4f92a44de895fb1e96459d4a8d - - default default] Security group member updated ['19c917c4-9016-402d-a3b3-6d8ae59f7b74']#033[00m
Nov 23 05:00:31 localhost podman[312853]: 
Nov 23 05:00:31 localhost podman[312853]: 2025-11-23 10:00:31.337703336 +0000 UTC m=+0.094097522 container create 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 23 05:00:31 localhost systemd[1]: Started libpod-conmon-31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376.scope.
Nov 23 05:00:31 localhost podman[312853]: 2025-11-23 10:00:31.29226515 +0000 UTC m=+0.048659376 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:00:31 localhost systemd[1]: Started libcrun container.
Nov 23 05:00:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfccb0beb940d3da2d304a919452b678105c8b62a0c80f7d3f1977f2dc3c978/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:00:31 localhost podman[312853]: 2025-11-23 10:00:31.4235654 +0000 UTC m=+0.179959586 container init 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:00:31 localhost podman[312853]: 2025-11-23 10:00:31.432508465 +0000 UTC m=+0.188902651 container start 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 05:00:31 localhost dnsmasq[312872]: started, version 2.85 cachesize 150
Nov 23 05:00:31 localhost dnsmasq[312872]: DNS service limited to local subnets
Nov 23 05:00:31 localhost dnsmasq[312872]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:00:31 localhost dnsmasq[312872]: warning: no upstream servers configured
Nov 23 05:00:31 localhost dnsmasq-dhcp[312872]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:00:31 localhost dnsmasq[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/addn_hosts - 0 addresses
Nov 23 05:00:31 localhost dnsmasq-dhcp[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/host
Nov 23 05:00:31 localhost dnsmasq-dhcp[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/opts
Nov 23 05:00:31 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:31.628 262721 INFO neutron.agent.dhcp.agent [None req-d5b53b21-0910-49b1-b0b9-e056306de018 - - - - - -] DHCP configuration for ports {'daac105f-038e-48ca-854b-c66816dc5919'} is completed#033[00m
Nov 23 05:00:32 localhost podman[312898]: 
Nov 23 05:00:32 localhost podman[312898]: 2025-11-23 10:00:32.064305087 +0000 UTC m=+0.087122069 container create 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:00:32 localhost systemd[1]: Started libpod-conmon-0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0.scope.
Nov 23 05:00:32 localhost systemd[1]: Started libcrun container.
Nov 23 05:00:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936d321dd702ab91d2ed86fd51aa6b49e07eee1bb4a3b64179936ba22b486f6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:00:32 localhost podman[312898]: 2025-11-23 10:00:32.024762213 +0000 UTC m=+0.047579205 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:00:32 localhost podman[312898]: 2025-11-23 10:00:32.123232273 +0000 UTC m=+0.146049215 container init 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 23 05:00:32 localhost podman[312898]: 2025-11-23 10:00:32.135627253 +0000 UTC m=+0.158444205 container start 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 23 05:00:32 localhost dnsmasq[312916]: started, version 2.85 cachesize 150
Nov 23 05:00:32 localhost dnsmasq[312916]: DNS service limited to local subnets
Nov 23 05:00:32 localhost dnsmasq[312916]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:00:32 localhost dnsmasq[312916]: warning: no upstream servers configured
Nov 23 05:00:32 localhost dnsmasq-dhcp[312916]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:00:32 localhost dnsmasq[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/addn_hosts - 0 addresses
Nov 23 05:00:32 localhost dnsmasq-dhcp[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/host
Nov 23 05:00:32 localhost dnsmasq-dhcp[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/opts
Nov 23 05:00:32 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:32.281 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:32Z, description=, device_id=ed55ac8e-d19b-4b86-9376-35319cff980d, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c9e310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c9e190>], id=817055c0-05c1-4519-9a9e-0bac0510808e, ip_allocation=immediate, mac_address=fa:16:3e:e3:27:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1345, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:32Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:32 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:32.320 262721 INFO neutron.agent.dhcp.agent [None req-5c0768b1-ae3a-43c4-b0d4-421137121eb0 - - - - - -] DHCP configuration for ports {'e567273f-7d5a-471e-906e-86a415ac2df3'} is completed#033[00m
Nov 23 05:00:32 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:00:32 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:32 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:32 localhost podman[312935]: 2025-11-23 10:00:32.505632858 +0000 UTC m=+0.061479147 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 23 05:00:32 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:32.781 262721 INFO neutron.agent.dhcp.agent [None req-cb131d4a-d7fd-4509-a074-a7cb09dafddc - - - - - -] DHCP configuration for ports {'817055c0-05c1-4519-9a9e-0bac0510808e'} is completed#033[00m
Nov 23 05:00:33 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:33.110 2 INFO neutron.agent.securitygroups_rpc [None req-ddca7662-4ee1-4886-8e69-87187e050157 e02e3a6650d04cc49a5785719f724cce cb02ad4f92a44de895fb1e96459d4a8d - - default default] Security group member updated ['19c917c4-9016-402d-a3b3-6d8ae59f7b74']#033[00m
Nov 23 05:00:33 localhost nova_compute[281613]: 2025-11-23 10:00:33.173 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:33 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:33.495 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:32Z, description=, device_id=8fde7c15-ff43-4882-8cdf-309aec357707, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790dc1670>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790dbbb80>], id=7eaf7ceb-db20-4976-9ad8-0064b73c9db2, ip_allocation=immediate, mac_address=fa:16:3e:df:d8:5d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1353, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:33Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:33 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:00:33 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:33 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:33 localhost podman[312974]: 2025-11-23 10:00:33.732406074 +0000 UTC m=+0.059667788 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 05:00:33 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:33.988 262721 INFO neutron.agent.dhcp.agent [None req-d4ee62ce-8402-43e6-9eea-d6a2c1de27ee - - - - - -] DHCP configuration for ports {'7eaf7ceb-db20-4976-9ad8-0064b73c9db2'} is completed#033[00m
Nov 23 05:00:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:34 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:34.876 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:34Z, description=, device_id=ed55ac8e-d19b-4b86-9376-35319cff980d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c60550>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c60730>], id=3efebca6-7654-4994-8fd7-f1448bebb69e, ip_allocation=immediate, mac_address=fa:16:3e:0f:9e:94, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:27Z, description=, dns_domain=, id=22104048-78a3-4c68-b7b5-e7559026cd59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1733760222-network, port_security_enabled=True, project_id=bdcbc1cb959a4f9bbeea8ec1d869ac05, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18528, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1317, status=ACTIVE, subnets=['00bbb54e-951b-4021-978d-bac7c740b642'], tags=[], tenant_id=bdcbc1cb959a4f9bbeea8ec1d869ac05, updated_at=2025-11-23T10:00:29Z, vlan_transparent=None, network_id=22104048-78a3-4c68-b7b5-e7559026cd59, port_security_enabled=False, project_id=bdcbc1cb959a4f9bbeea8ec1d869ac05, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1357, status=DOWN, tags=[], tenant_id=bdcbc1cb959a4f9bbeea8ec1d869ac05, updated_at=2025-11-23T10:00:34Z on network 22104048-78a3-4c68-b7b5-e7559026cd59#033[00m
Nov 23 05:00:35 localhost dnsmasq[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/addn_hosts - 1 addresses
Nov 23 05:00:35 localhost podman[313012]: 2025-11-23 10:00:35.087446336 +0000 UTC m=+0.063271165 container kill 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:35 localhost dnsmasq-dhcp[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/host
Nov 23 05:00:35 localhost dnsmasq-dhcp[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/opts
Nov 23 05:00:35 localhost nova_compute[281613]: 2025-11-23 10:00:35.414 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:35 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:35.423 262721 INFO neutron.agent.dhcp.agent [None req-2f9d4d18-703d-41c7-8ff4-5af77a3bbaad - - - - - -] DHCP configuration for ports {'3efebca6-7654-4994-8fd7-f1448bebb69e'} is completed#033[00m
Nov 23 05:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:00:35 localhost nova_compute[281613]: 2025-11-23 10:00:35.850 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:35 localhost podman[313050]: 2025-11-23 10:00:35.885681762 +0000 UTC m=+0.099984522 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 05:00:35 localhost podman[313050]: 2025-11-23 10:00:35.899938823 +0000 UTC m=+0.114241643 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6)
Nov 23 05:00:35 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:00:35 localhost podman[313056]: 2025-11-23 10:00:35.99355004 +0000 UTC m=+0.197149747 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:00:36 localhost podman[313056]: 2025-11-23 10:00:36.00596245 +0000 UTC m=+0.209562107 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:00:36 localhost podman[313051]: 2025-11-23 10:00:36.051653463 +0000 UTC m=+0.259143956 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 05:00:36 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:00:36 localhost systemd[1]: tmp-crun.7Wgx9m.mount: Deactivated successfully.
Nov 23 05:00:36 localhost podman[313051]: 2025-11-23 10:00:36.091040283 +0000 UTC m=+0.298530776 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 05:00:36 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:00:36 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:36.293 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:35Z, description=, device_id=5c9e1a07-ba42-4edd-adfc-fd75351e9dda, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cb1d60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cb1250>], id=24c78e1b-d18f-4401-8718-e1f01152e64f, ip_allocation=immediate, mac_address=fa:16:3e:ad:9f:a1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1363, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:35Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:36 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:36 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:36 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:36 localhost podman[313166]: 2025-11-23 10:00:36.531621393 +0000 UTC m=+0.065400934 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:00:36 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:00:36 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:00:36 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:36.760 262721 INFO neutron.agent.dhcp.agent [None req-cd3c5e3d-b52f-406b-aa86-0ef087a4ad65 - - - - - -] DHCP configuration for ports {'24c78e1b-d18f-4401-8718-e1f01152e64f'} is completed#033[00m
Nov 23 05:00:37 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:37.011 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:34Z, description=, device_id=ed55ac8e-d19b-4b86-9376-35319cff980d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c7ddc0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c7dc10>], id=3efebca6-7654-4994-8fd7-f1448bebb69e, ip_allocation=immediate, mac_address=fa:16:3e:0f:9e:94, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:27Z, description=, dns_domain=, id=22104048-78a3-4c68-b7b5-e7559026cd59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1733760222-network, port_security_enabled=True, project_id=bdcbc1cb959a4f9bbeea8ec1d869ac05, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=18528, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1317, status=ACTIVE, subnets=['00bbb54e-951b-4021-978d-bac7c740b642'], tags=[], tenant_id=bdcbc1cb959a4f9bbeea8ec1d869ac05, updated_at=2025-11-23T10:00:29Z, vlan_transparent=None, network_id=22104048-78a3-4c68-b7b5-e7559026cd59, port_security_enabled=False, project_id=bdcbc1cb959a4f9bbeea8ec1d869ac05, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1357, status=DOWN, tags=[], tenant_id=bdcbc1cb959a4f9bbeea8ec1d869ac05, updated_at=2025-11-23T10:00:34Z on network 22104048-78a3-4c68-b7b5-e7559026cd59#033[00m
Nov 23 05:00:37 localhost dnsmasq[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/addn_hosts - 1 addresses
Nov 23 05:00:37 localhost dnsmasq-dhcp[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/host
Nov 23 05:00:37 localhost dnsmasq-dhcp[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/opts
Nov 23 05:00:37 localhost podman[313235]: 2025-11-23 10:00:37.215646807 +0000 UTC m=+0.059051100 container kill 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 05:00:37 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:37.516 262721 INFO neutron.agent.dhcp.agent [None req-5f72715d-0f0c-4310-ae26-be3b34a55413 - - - - - -] DHCP configuration for ports {'3efebca6-7654-4994-8fd7-f1448bebb69e'} is completed#033[00m
Nov 23 05:00:39 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:00:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:40 localhost nova_compute[281613]: 2025-11-23 10:00:40.437 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:40 localhost nova_compute[281613]: 2025-11-23 10:00:40.852 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:40 localhost nova_compute[281613]: 2025-11-23 10:00:40.972 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:41 localhost podman[240144]: time="2025-11-23T10:00:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:00:41 localhost podman[240144]: @ - - [23/Nov/2025:10:00:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159973 "" "Go-http-client/1.1"
Nov 23 05:00:41 localhost podman[240144]: @ - - [23/Nov/2025:10:00:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20159 "" "Go-http-client/1.1"
Nov 23 05:00:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:41.836 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:40Z, description=, device_id=090118ef-c1fa-4398-a366-d88c413b007d, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b72640>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b728e0>], id=a7dc350d-2645-40eb-9c84-9c6eb806aaba, ip_allocation=immediate, mac_address=fa:16:3e:44:74:0d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1379, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:41Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:42 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 05:00:42 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:42 localhost podman[313271]: 2025-11-23 10:00:42.121380433 +0000 UTC m=+0.058066182 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:00:42 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:42 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:42.464 262721 INFO neutron.agent.dhcp.agent [None req-0edb08fd-acd3-4eb8-82e1-db289344602d - - - - - -] DHCP configuration for ports {'a7dc350d-2645-40eb-9c84-9c6eb806aaba'} is completed#033[00m
Nov 23 05:00:42 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:42 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:42 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:42 localhost podman[313308]: 2025-11-23 10:00:42.614777081 +0000 UTC m=+0.062569316 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:00:43 localhost dnsmasq[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/addn_hosts - 0 addresses
Nov 23 05:00:43 localhost dnsmasq-dhcp[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/host
Nov 23 05:00:43 localhost dnsmasq-dhcp[312872]: read /var/lib/neutron/dhcp/22104048-78a3-4c68-b7b5-e7559026cd59/opts
Nov 23 05:00:43 localhost podman[313347]: 2025-11-23 10:00:43.424413409 +0000 UTC m=+0.059706807 container kill 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:00:44 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:44.392 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:43Z, description=, device_id=5c9e1a07-ba42-4edd-adfc-fd75351e9dda, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c24ca0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c24a60>], id=9a805795-2c2e-4531-8a7b-a098e7e4f2d2, ip_allocation=immediate, mac_address=fa:16:3e:02:a5:08, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:27Z, description=, dns_domain=, id=966bcc71-4187-4728-a19e-82678e97db57, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-1414775608-network, port_security_enabled=True, project_id=f5622ef76e6a4d328c523c81acbdbe75, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44277, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1315, status=ACTIVE, subnets=['364e4243-b8ff-4559-ad77-9f0750986ddd'], tags=[], tenant_id=f5622ef76e6a4d328c523c81acbdbe75, updated_at=2025-11-23T10:00:29Z, vlan_transparent=None, network_id=966bcc71-4187-4728-a19e-82678e97db57, port_security_enabled=False, project_id=f5622ef76e6a4d328c523c81acbdbe75, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1382, status=DOWN, tags=[], tenant_id=f5622ef76e6a4d328c523c81acbdbe75, updated_at=2025-11-23T10:00:43Z on network 966bcc71-4187-4728-a19e-82678e97db57#033[00m
Nov 23 05:00:44 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:44.499 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:44Z, description=, device_id=b1f1026c-8ba2-4764-87e9-99da8369920a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c70b50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c70610>], id=5b3a1f23-c847-4dfd-8192-9421cc66e71a, ip_allocation=immediate, mac_address=fa:16:3e:e0:a2:f5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1393, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:44Z on network 4888f017-3f3f-45ef-b058-53b634233093
Nov 23 05:00:44 localhost nova_compute[281613]: 2025-11-23 10:00:44.570 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:00:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:44.572 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 05:00:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:44.573 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 23 05:00:44 localhost nova_compute[281613]: 2025-11-23 10:00:44.578 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:00:44 localhost kernel: device tape2ea765d-13 left promiscuous mode
Nov 23 05:00:44 localhost ovn_controller[153786]: 2025-11-23T10:00:44Z|00093|binding|INFO|Releasing lport e2ea765d-13a4-4bd1-9a7d-413908230e1f from this chassis (sb_readonly=0)
Nov 23 05:00:44 localhost ovn_controller[153786]: 2025-11-23T10:00:44Z|00094|binding|INFO|Setting lport e2ea765d-13a4-4bd1-9a7d-413908230e1f down in Southbound
Nov 23 05:00:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:44.589 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-22104048-78a3-4c68-b7b5-e7559026cd59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22104048-78a3-4c68-b7b5-e7559026cd59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bdcbc1cb959a4f9bbeea8ec1d869ac05', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8bdf2c07-5eb6-4dfa-92ae-408208a6f5cd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=e2ea765d-13a4-4bd1-9a7d-413908230e1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 05:00:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:44.590 159429 INFO neutron.agent.ovn.metadata.agent [-] Port e2ea765d-13a4-4bd1-9a7d-413908230e1f in datapath 22104048-78a3-4c68-b7b5-e7559026cd59 unbound from our chassis
Nov 23 05:00:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:44.592 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22104048-78a3-4c68-b7b5-e7559026cd59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 05:00:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:44.593 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe49fd8-c423-4058-b864-a124381cce73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 05:00:44 localhost nova_compute[281613]: 2025-11-23 10:00:44.596 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:00:44 localhost nova_compute[281613]: 2025-11-23 10:00:44.597 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:00:44 localhost podman[313387]: 2025-11-23 10:00:44.636069521 +0000 UTC m=+0.054309499 container kill 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 23 05:00:44 localhost dnsmasq[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/addn_hosts - 1 addresses
Nov 23 05:00:44 localhost dnsmasq-dhcp[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/host
Nov 23 05:00:44 localhost dnsmasq-dhcp[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/opts
Nov 23 05:00:44 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 05:00:44 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:44 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:44 localhost podman[313418]: 2025-11-23 10:00:44.741195793 +0000 UTC m=+0.062570896 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:44 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:44.872 262721 INFO neutron.agent.dhcp.agent [None req-8e93f0a8-a8e5-4dbd-abfb-9f146be817ee - - - - - -] DHCP configuration for ports {'9a805795-2c2e-4531-8a7b-a098e7e4f2d2'} is completed
Nov 23 05:00:45 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:45.020 262721 INFO neutron.agent.dhcp.agent [None req-92d64a68-94c9-4332-8a7e-818a56935ef9 - - - - - -] DHCP configuration for ports {'5b3a1f23-c847-4dfd-8192-9421cc66e71a'} is completed
Nov 23 05:00:45 localhost nova_compute[281613]: 2025-11-23 10:00:45.441 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:00:45 localhost nova_compute[281613]: 2025-11-23 10:00:45.853 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:00:46 localhost podman[313446]: 2025-11-23 10:00:46.19447114 +0000 UTC m=+0.091856500 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:00:46 localhost podman[313446]: 2025-11-23 10:00:46.203326182 +0000 UTC m=+0.100711572 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:00:46 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:00:46 localhost podman[313445]: 2025-11-23 10:00:46.252197483 +0000 UTC m=+0.152917524 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:00:46 localhost podman[313445]: 2025-11-23 10:00:46.270006871 +0000 UTC m=+0.170726902 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:00:46 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:00:46 localhost podman[313444]: 2025-11-23 10:00:46.341710127 +0000 UTC m=+0.245575804 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:46 localhost podman[313447]: 2025-11-23 10:00:46.416446985 +0000 UTC m=+0.308006154 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:00:46 localhost podman[313444]: 2025-11-23 10:00:46.422588414 +0000 UTC m=+0.326454101 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:00:46 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:00:46 localhost podman[313447]: 2025-11-23 10:00:46.46510975 +0000 UTC m=+0.356668979 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:00:46 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:00:46 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:46 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:46 localhost podman[313540]: 2025-11-23 10:00:46.670863491 +0000 UTC m=+0.059899023 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 05:00:46 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:47 localhost systemd[1]: tmp-crun.HQlUQ4.mount: Deactivated successfully.
Nov 23 05:00:48 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:48.429 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:48Z, description=, device_id=35a9f04a-16c9-4308-a588-bdf2aa3c107c, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c70340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c70dc0>], id=f34ee945-4a46-4544-b241-446f2f32a278, ip_allocation=immediate, mac_address=fa:16:3e:e5:1d:b9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1401, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:48Z on network 4888f017-3f3f-45ef-b058-53b634233093
Nov 23 05:00:48 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 05:00:48 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:48 localhost podman[313578]: 2025-11-23 10:00:48.659449814 +0000 UTC m=+0.069213559 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 05:00:48 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:48 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:48.876 262721 INFO neutron.agent.dhcp.agent [None req-78d4b587-9cdf-41d1-8e18-552faa088570 - - - - - -] DHCP configuration for ports {'f34ee945-4a46-4544-b241-446f2f32a278'} is completed
Nov 23 05:00:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:49 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:49.863 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:43Z, description=, device_id=5c9e1a07-ba42-4edd-adfc-fd75351e9dda, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c7dcd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c7dbe0>], id=9a805795-2c2e-4531-8a7b-a098e7e4f2d2, ip_allocation=immediate, mac_address=fa:16:3e:02:a5:08, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:00:27Z, description=, dns_domain=, id=966bcc71-4187-4728-a19e-82678e97db57, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-1414775608-network, port_security_enabled=True, project_id=f5622ef76e6a4d328c523c81acbdbe75, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44277, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1315, status=ACTIVE, subnets=['364e4243-b8ff-4559-ad77-9f0750986ddd'], tags=[], tenant_id=f5622ef76e6a4d328c523c81acbdbe75, updated_at=2025-11-23T10:00:29Z, vlan_transparent=None, network_id=966bcc71-4187-4728-a19e-82678e97db57, port_security_enabled=False, project_id=f5622ef76e6a4d328c523c81acbdbe75, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1382, status=DOWN, tags=[], tenant_id=f5622ef76e6a4d328c523c81acbdbe75, updated_at=2025-11-23T10:00:43Z on network 966bcc71-4187-4728-a19e-82678e97db57
Nov 23 05:00:50 localhost dnsmasq[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/addn_hosts - 1 addresses
Nov 23 05:00:50 localhost dnsmasq-dhcp[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/host
Nov 23 05:00:50 localhost podman[313616]: 2025-11-23 10:00:50.068731044 +0000 UTC m=+0.058018162 container kill 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 05:00:50 localhost dnsmasq-dhcp[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/opts
Nov 23 05:00:50 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:50.302 2 INFO neutron.agent.securitygroups_rpc [None req-76f35429-371d-4a0a-a261-a7949dd94068 e59892284e454ae28c30542a06194f67 7d06d32932c14944b00061256a49a5ca - - default default] Security group member updated ['3d66d90b-639c-4111-b259-a5454103aaa3']
Nov 23 05:00:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:50.341 262721 INFO neutron.agent.dhcp.agent [None req-17107301-3baf-4b46-ac12-9b198e73b8c4 - - - - - -] DHCP configuration for ports {'9a805795-2c2e-4531-8a7b-a098e7e4f2d2'} is completed
Nov 23 05:00:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:50.358 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:49Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d38730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cba250>], id=ccfc5649-2c95-4410-9ece-a959ecbe18fe, ip_allocation=immediate, mac_address=fa:16:3e:90:aa:b2, name=tempest-RoutersAdminNegativeIpV6Test-2063637475, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=True, project_id=7d06d32932c14944b00061256a49a5ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['3d66d90b-639c-4111-b259-a5454103aaa3'], standard_attr_id=1412, status=DOWN, tags=[], tenant_id=7d06d32932c14944b00061256a49a5ca, updated_at=2025-11-23T10:00:50Z on network 4888f017-3f3f-45ef-b058-53b634233093
Nov 23 05:00:50 localhost nova_compute[281613]: 2025-11-23 10:00:50.443 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:00:50 localhost systemd[1]: tmp-crun.u39mvF.mount: Deactivated successfully.
Nov 23 05:00:50 localhost podman[313655]: 2025-11-23 10:00:50.612125073 +0000 UTC m=+0.079922702 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:50 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 6 addresses
Nov 23 05:00:50 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:50 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:50 localhost nova_compute[281613]: 2025-11-23 10:00:50.779 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:50.791 262721 INFO neutron.agent.dhcp.agent [None req-92f59a0a-f450-4e26-a131-02f3f7620a6d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:49Z, description=, device_id=f7a0be90-e300-4a73-a0b8-924cc670fd55, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ca9370>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c31f40>], id=c234e772-4845-4844-8ab0-875559aa1627, ip_allocation=immediate, mac_address=fa:16:3e:46:0f:e6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1405, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:49Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:50 localhost nova_compute[281613]: 2025-11-23 10:00:50.856 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:51.032 262721 INFO neutron.agent.dhcp.agent [None req-2b84f0d4-007a-4a5a-acb4-e72c8d3ceed6 - - - - - -] DHCP configuration for ports {'ccfc5649-2c95-4410-9ece-a959ecbe18fe'} is completed#033[00m
Nov 23 05:00:51 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 7 addresses
Nov 23 05:00:51 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:51 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:51 localhost podman[313694]: 2025-11-23 10:00:51.2053967 +0000 UTC m=+0.063343769 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 05:00:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:51.503 262721 INFO neutron.agent.dhcp.agent [None req-748e9868-0fee-4a56-a49e-9384872b8bcb - - - - - -] DHCP configuration for ports {'c234e772-4845-4844-8ab0-875559aa1627'} is completed#033[00m
Nov 23 05:00:51 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 6 addresses
Nov 23 05:00:51 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:51 localhost podman[313733]: 2025-11-23 10:00:51.590645562 +0000 UTC m=+0.059232325 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:51 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:52 localhost openstack_network_exporter[242118]: ERROR   10:00:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:00:52 localhost openstack_network_exporter[242118]: ERROR   10:00:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:00:52 localhost openstack_network_exporter[242118]: ERROR   10:00:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:00:52 localhost openstack_network_exporter[242118]: ERROR   10:00:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:00:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:00:52 localhost openstack_network_exporter[242118]: ERROR   10:00:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:00:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:00:52 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:52.343 2 INFO neutron.agent.securitygroups_rpc [None req-aaa2c11d-b613-412d-8872-49d855ed78d3 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:00:52 localhost dnsmasq[312872]: exiting on receipt of SIGTERM
Nov 23 05:00:52 localhost podman[313770]: 2025-11-23 10:00:52.364727246 +0000 UTC m=+0.072519449 container kill 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 05:00:52 localhost systemd[1]: tmp-crun.KwdkgB.mount: Deactivated successfully.
Nov 23 05:00:52 localhost systemd[1]: libpod-31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376.scope: Deactivated successfully.
Nov 23 05:00:52 localhost podman[313782]: 2025-11-23 10:00:52.441606924 +0000 UTC m=+0.061609431 container died 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 05:00:52 localhost systemd[1]: tmp-crun.peFzwm.mount: Deactivated successfully.
Nov 23 05:00:52 localhost podman[313782]: 2025-11-23 10:00:52.479437391 +0000 UTC m=+0.099439838 container cleanup 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:00:52 localhost systemd[1]: libpod-conmon-31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376.scope: Deactivated successfully.
Nov 23 05:00:52 localhost podman[313784]: 2025-11-23 10:00:52.520755583 +0000 UTC m=+0.132033230 container remove 31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-22104048-78a3-4c68-b7b5-e7559026cd59, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:00:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:52.885 262721 INFO neutron.agent.dhcp.agent [None req-c6b7e087-f5ad-4c5f-be64-69198250ab2a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:00:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:52.886 262721 INFO neutron.agent.dhcp.agent [None req-c6b7e087-f5ad-4c5f-be64-69198250ab2a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:00:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:52.889 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:00:53 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:53.024 2 INFO neutron.agent.securitygroups_rpc [None req-6f916793-e69f-491a-861e-c8f5876d7582 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:00:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:53.082 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:00:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:53.084 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:00:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:53.087 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:00:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:53.088 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[303ede32-7dad-46ee-91d1-057617c39f92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:00:53 localhost systemd[1]: var-lib-containers-storage-overlay-2bfccb0beb940d3da2d304a919452b678105c8b62a0c80f7d3f1977f2dc3c978-merged.mount: Deactivated successfully.
Nov 23 05:00:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31fd698ca67a9a1cc831b85d938dc44bb99dec4f0efde1679b9c51831d8ef376-userdata-shm.mount: Deactivated successfully.
Nov 23 05:00:53 localhost systemd[1]: run-netns-qdhcp\x2d22104048\x2d78a3\x2d4c68\x2db7b5\x2de7559026cd59.mount: Deactivated successfully.
Nov 23 05:00:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:53.576 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:00:53 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:53.682 2 INFO neutron.agent.securitygroups_rpc [None req-8312fbc9-ef14-45ec-8246-3e5b81a28890 e59892284e454ae28c30542a06194f67 7d06d32932c14944b00061256a49a5ca - - default default] Security group member updated ['3d66d90b-639c-4111-b259-a5454103aaa3']#033[00m
Nov 23 05:00:53 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 05:00:53 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:53 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:53 localhost podman[313828]: 2025-11-23 10:00:53.932146531 +0000 UTC m=+0.048641774 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 05:00:54 localhost sshd[313848]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:00:54 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:54.433 2 INFO neutron.agent.securitygroups_rpc [None req-0abb4626-8029-4616-a7f3-bc7ec334c676 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:00:55 localhost nova_compute[281613]: 2025-11-23 10:00:55.446 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:55 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:55.778 2 INFO neutron.agent.securitygroups_rpc [None req-f375f660-8a27-403d-b89f-e04d1f758be2 2cfd21f178604be289d8bb16b3b9c18f 0f8848490fb54a5cb41f1607121a115c - - default default] Security group member updated ['c9d46e70-8b37-41f1-b62d-e1679c8d4c9c']#033[00m
Nov 23 05:00:55 localhost nova_compute[281613]: 2025-11-23 10:00:55.858 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:55 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:00:55 localhost podman[313868]: 2025-11-23 10:00:55.977335797 +0000 UTC m=+0.061945040 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:55 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:55 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:56 localhost neutron_sriov_agent[255613]: 2025-11-23 10:00:56.606 2 INFO neutron.agent.securitygroups_rpc [None req-9e459c8e-445e-40f9-9db8-24291ed822a4 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:00:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:57.794 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:00:57Z, description=, device_id=623245a3-0a87-4a1f-a1de-9520196263a1, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cba040>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cbac10>], id=33637e72-67db-4c0d-bc42-5cd8470d3a5d, ip_allocation=immediate, mac_address=fa:16:3e:14:62:eb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1438, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:00:57Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:00:57 localhost podman[313906]: 2025-11-23 10:00:57.996028846 +0000 UTC m=+0.059822211 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:00:57 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 05:00:57 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:00:57 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:00:58 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:00:58.226 262721 INFO neutron.agent.dhcp.agent [None req-7b71bb01-ca39-4141-a47f-07915da8947a - - - - - -] DHCP configuration for ports {'33637e72-67db-4c0d-bc42-5cd8470d3a5d'} is completed#033[00m
Nov 23 05:00:58 localhost dnsmasq[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/addn_hosts - 0 addresses
Nov 23 05:00:58 localhost dnsmasq-dhcp[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/host
Nov 23 05:00:58 localhost podman[313945]: 2025-11-23 10:00:58.256069126 +0000 UTC m=+0.069316592 container kill 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 05:00:58 localhost dnsmasq-dhcp[312916]: read /var/lib/neutron/dhcp/966bcc71-4187-4728-a19e-82678e97db57/opts
Nov 23 05:00:58 localhost nova_compute[281613]: 2025-11-23 10:00:58.508 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:58 localhost kernel: device tap9b95875d-73 left promiscuous mode
Nov 23 05:00:58 localhost ovn_controller[153786]: 2025-11-23T10:00:58Z|00095|binding|INFO|Releasing lport 9b95875d-73fc-4285-bf07-eeda3bcab60d from this chassis (sb_readonly=0)
Nov 23 05:00:58 localhost ovn_controller[153786]: 2025-11-23T10:00:58Z|00096|binding|INFO|Setting lport 9b95875d-73fc-4285-bf07-eeda3bcab60d down in Southbound
Nov 23 05:00:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:58.524 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-966bcc71-4187-4728-a19e-82678e97db57', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-966bcc71-4187-4728-a19e-82678e97db57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5622ef76e6a4d328c523c81acbdbe75', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42d0a21c-ba54-4e82-a5f9-23689c8629da, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=9b95875d-73fc-4285-bf07-eeda3bcab60d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:00:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:58.526 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 9b95875d-73fc-4285-bf07-eeda3bcab60d in datapath 966bcc71-4187-4728-a19e-82678e97db57 unbound from our chassis#033[00m
Nov 23 05:00:58 localhost nova_compute[281613]: 2025-11-23 10:00:58.527 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:00:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:58.529 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 966bcc71-4187-4728-a19e-82678e97db57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:00:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:00:58.530 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[d92d31ac-5fdb-4ec1-848d-faf9130c9df4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:00:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:00 localhost nova_compute[281613]: 2025-11-23 10:01:00.449 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:00 localhost nova_compute[281613]: 2025-11-23 10:01:00.911 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:01 localhost podman[313999]: 2025-11-23 10:01:01.992215393 +0000 UTC m=+0.060864339 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:01:01 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:01:01 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:01 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:02 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:02.095 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:02 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:02.097 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:01:02 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:02.101 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:02 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:02.102 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e030756c-c064-497e-9d1a-1be1a49f06d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:02 localhost podman[314036]: 2025-11-23 10:01:02.739333418 +0000 UTC m=+0.062836424 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:01:02 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:01:02 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:02 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:02 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:02.867 2 INFO neutron.agent.securitygroups_rpc [None req-6be775ef-69f0-4d4b-8550-9fb2b9ad120e 2cfd21f178604be289d8bb16b3b9c18f 0f8848490fb54a5cb41f1607121a115c - - default default] Security group member updated ['c9d46e70-8b37-41f1-b62d-e1679c8d4c9c']#033[00m
Nov 23 05:01:03 localhost nova_compute[281613]: 2025-11-23 10:01:03.085 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:03 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:03.405 2 INFO neutron.agent.securitygroups_rpc [None req-4c75c4e7-1ac8-40bf-8bb2-ec0e75e4208b a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:01:04 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:04.462 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:03Z, description=, device_id=8cccc422-2fae-491e-9c51-458fb294e626, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cca760>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ccadc0>], id=e2b1a73d-82c1-44d2-8de6-b3f40dc3eff1, ip_allocation=immediate, mac_address=fa:16:3e:ef:62:db, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1464, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:01:04Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:01:04 localhost systemd[1]: tmp-crun.fTQben.mount: Deactivated successfully.
Nov 23 05:01:04 localhost dnsmasq[312916]: exiting on receipt of SIGTERM
Nov 23 05:01:04 localhost podman[314073]: 2025-11-23 10:01:04.480466317 +0000 UTC m=+0.075882351 container kill 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:01:04 localhost systemd[1]: libpod-0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0.scope: Deactivated successfully.
Nov 23 05:01:04 localhost podman[314087]: 2025-11-23 10:01:04.558694492 +0000 UTC m=+0.058779723 container died 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:01:04 localhost podman[314087]: 2025-11-23 10:01:04.603944752 +0000 UTC m=+0.104029943 container cleanup 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:01:04 localhost systemd[1]: libpod-conmon-0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0.scope: Deactivated successfully.
Nov 23 05:01:04 localhost podman[314088]: 2025-11-23 10:01:04.691671017 +0000 UTC m=+0.186571715 container remove 0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-966bcc71-4187-4728-a19e-82678e97db57, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 05:01:04 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:04.725 262721 INFO neutron.agent.dhcp.agent [None req-b9635b6c-af46-4e37-bef2-d2ba21dd9ea6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:04 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:01:04 localhost podman[314129]: 2025-11-23 10:01:04.817197449 +0000 UTC m=+0.066171866 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:01:04 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:04 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:05.114 262721 INFO neutron.agent.dhcp.agent [None req-eee92a44-6b4a-4eed-9336-16822e3ad80a - - - - - -] DHCP configuration for ports {'e2b1a73d-82c1-44d2-8de6-b3f40dc3eff1'} is completed#033[00m
Nov 23 05:01:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:05.118 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:05 localhost nova_compute[281613]: 2025-11-23 10:01:05.452 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:05 localhost systemd[1]: tmp-crun.Fi0ZhK.mount: Deactivated successfully.
Nov 23 05:01:05 localhost systemd[1]: var-lib-containers-storage-overlay-936d321dd702ab91d2ed86fd51aa6b49e07eee1bb4a3b64179936ba22b486f6a-merged.mount: Deactivated successfully.
Nov 23 05:01:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cd82402c3e7e5d17a689f5ef78d9d3b69442826df06f6e5a594219d4fafebc0-userdata-shm.mount: Deactivated successfully.
Nov 23 05:01:05 localhost systemd[1]: run-netns-qdhcp\x2d966bcc71\x2d4187\x2d4728\x2da19e\x2d82678e97db57.mount: Deactivated successfully.
Nov 23 05:01:05 localhost nova_compute[281613]: 2025-11-23 10:01:05.912 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:01:06 localhost podman[314151]: 2025-11-23 10:01:06.186200085 +0000 UTC m=+0.086156643 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 05:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:01:06 localhost podman[314151]: 2025-11-23 10:01:06.201905665 +0000 UTC m=+0.101862203 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm)
Nov 23 05:01:06 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:01:06 localhost podman[314152]: 2025-11-23 10:01:06.289464025 +0000 UTC m=+0.186894244 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:01:06 localhost podman[314152]: 2025-11-23 10:01:06.301022283 +0000 UTC m=+0.198452442 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:01:06 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:01:06 localhost podman[314182]: 2025-11-23 10:01:06.395163564 +0000 UTC m=+0.183683297 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 05:01:06 localhost podman[314182]: 2025-11-23 10:01:06.410018131 +0000 UTC m=+0.198537844 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:01:06 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:01:06 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e113 e113: 6 total, 6 up, 6 in
Nov 23 05:01:07 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:01:07 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:07 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:07 localhost podman[314230]: 2025-11-23 10:01:07.024760836 +0000 UTC m=+0.048518371 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:01:07 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:07.208 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.3 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:07 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:07.211 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:01:07 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:07.214 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:07 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:07.215 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d3a0d0-9a01-444c-977b-64bdf9365c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e114 e114: 6 total, 6 up, 6 in
Nov 23 05:01:08 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:08.183 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:07Z, description=, device_id=de9f983f-1f13-46ad-82e3-496e0a20a1d1, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b24790>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b241f0>], id=bef716b6-1f20-46ae-b60c-60e603ef87f9, ip_allocation=immediate, mac_address=fa:16:3e:71:ce:e1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1471, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:01:07Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:01:08 localhost podman[314269]: 2025-11-23 10:01:08.398662476 +0000 UTC m=+0.058930927 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 05:01:08 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:01:08 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:08 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:08 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:08.489 2 INFO neutron.agent.securitygroups_rpc [None req-d30670ed-c29e-4282-92c2-f353a53316ea 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:08 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:08.793 262721 INFO neutron.agent.dhcp.agent [None req-36e01da2-7b13-4fe2-96b4-692b7b0bcb3c - - - - - -] DHCP configuration for ports {'bef716b6-1f20-46ae-b60c-60e603ef87f9'} is completed#033[00m
Nov 23 05:01:08 localhost systemd[1]: tmp-crun.atgKuQ.mount: Deactivated successfully.
Nov 23 05:01:08 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:01:08 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:08 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:08 localhost podman[314304]: 2025-11-23 10:01:08.924301948 +0000 UTC m=+0.072296663 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 05:01:09 localhost nova_compute[281613]: 2025-11-23 10:01:09.225 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:09.268 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:01:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:09.269 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:01:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:09.269 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:01:09 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:09.406 2 INFO neutron.agent.securitygroups_rpc [None req-ee598349-ae04-45e4-9403-8b439fe516e0 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:01:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:01:10 localhost nova_compute[281613]: 2025-11-23 10:01:10.482 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:10 localhost nova_compute[281613]: 2025-11-23 10:01:10.914 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:11 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:11.274 2 INFO neutron.agent.securitygroups_rpc [None req-4b64e528-1440-49a5-b870-0a0f4d60c275 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:01:11 localhost podman[240144]: time="2025-11-23T10:01:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:01:11 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:11.300 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:11 localhost podman[240144]: @ - - [23/Nov/2025:10:01:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:01:11 localhost podman[240144]: @ - - [23/Nov/2025:10:01:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19202 "" "Go-http-client/1.1"
Nov 23 05:01:12 localhost nova_compute[281613]: 2025-11-23 10:01:12.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 e115: 6 total, 6 up, 6 in
Nov 23 05:01:13 localhost nova_compute[281613]: 2025-11-23 10:01:13.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:13 localhost nova_compute[281613]: 2025-11-23 10:01:13.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:01:13 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:13.096 2 INFO neutron.agent.securitygroups_rpc [None req-a6cbb5e4-51d9-4058-bf96-80e547b16a25 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:01:13 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:13.814 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.3 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:13 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:13.816 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:01:13 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:13.819 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:13 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:13.820 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc29280-c7e6-44b0-b0cb-33526e8a7fa3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:14 localhost nova_compute[281613]: 2025-11-23 10:01:14.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:15 localhost nova_compute[281613]: 2025-11-23 10:01:15.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:15 localhost nova_compute[281613]: 2025-11-23 10:01:15.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:01:15 localhost nova_compute[281613]: 2025-11-23 10:01:15.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:01:15 localhost nova_compute[281613]: 2025-11-23 10:01:15.040 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:01:15 localhost nova_compute[281613]: 2025-11-23 10:01:15.041 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:15 localhost nova_compute[281613]: 2025-11-23 10:01:15.484 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:15 localhost nova_compute[281613]: 2025-11-23 10:01:15.917 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:16 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:16.334 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:16 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:16.336 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:01:16 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:16.339 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:16 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:16.340 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[576a61ef-7ce1-4a4b-97dc-568354b08edc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:16 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:16.637 2 INFO neutron.agent.securitygroups_rpc [None req-49a4dc10-ae4e-41b1-8135-2f2053e37dc6 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:01:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:16.690 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:17 localhost nova_compute[281613]: 2025-11-23 10:01:17.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:01:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:01:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:01:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:01:17 localhost systemd[1]: tmp-crun.TaauWl.mount: Deactivated successfully.
Nov 23 05:01:17 localhost podman[314329]: 2025-11-23 10:01:17.212595108 +0000 UTC m=+0.115166419 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3)
Nov 23 05:01:17 localhost podman[314329]: 2025-11-23 10:01:17.2520625 +0000 UTC m=+0.154633791 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 05:01:17 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:01:17 localhost podman[314330]: 2025-11-23 10:01:17.290490313 +0000 UTC m=+0.190502685 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:01:17 localhost podman[314330]: 2025-11-23 10:01:17.29986907 +0000 UTC m=+0.199881462 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:01:17 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:01:17 localhost podman[314328]: 2025-11-23 10:01:17.347049664 +0000 UTC m=+0.254589711 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:01:17 localhost podman[314328]: 2025-11-23 10:01:17.355063664 +0000 UTC m=+0.262603721 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:01:17 localhost podman[314336]: 2025-11-23 10:01:17.39540831 +0000 UTC m=+0.290805805 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 23 05:01:17 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:01:17 localhost podman[314336]: 2025-11-23 10:01:17.488891033 +0000 UTC m=+0.384288548 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 23 05:01:17 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.032 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.032 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.056 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.056 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.057 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.057 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.057 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:01:18 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:01:18 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/525196349' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.500 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:01:18 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:18.564 2 INFO neutron.agent.securitygroups_rpc [None req-599670a7-0640-44a2-ad54-406dd4624d40 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.774 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.776 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11660MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.776 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.777 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.840 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.840 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:01:18 localhost nova_compute[281613]: 2025-11-23 10:01:18.858 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:01:19 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:19.259 2 INFO neutron.agent.securitygroups_rpc [None req-7242a629-d88b-4313-9e67-39aa189122ef fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:01:19 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1791639122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:01:19 localhost nova_compute[281613]: 2025-11-23 10:01:19.294 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:01:19 localhost nova_compute[281613]: 2025-11-23 10:01:19.300 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:01:19 localhost nova_compute[281613]: 2025-11-23 10:01:19.323 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:01:19 localhost nova_compute[281613]: 2025-11-23 10:01:19.326 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:01:19 localhost nova_compute[281613]: 2025-11-23 10:01:19.326 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:01:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:20 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:20.230 2 INFO neutron.agent.securitygroups_rpc [None req-55e03d23-65b9-4d8a-853a-a5da3e2be7a3 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:20 localhost nova_compute[281613]: 2025-11-23 10:01:20.435 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:20 localhost nova_compute[281613]: 2025-11-23 10:01:20.487 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:20 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:20.785 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:20Z, description=, device_id=2d611b4e-6541-4913-986b-396d76162510, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cba430>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cbab80>], id=eccd9461-a422-4b62-b109-7d671c15da5d, ip_allocation=immediate, mac_address=fa:16:3e:99:9c:77, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1534, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:01:20Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:01:20 localhost nova_compute[281613]: 2025-11-23 10:01:20.919 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:21 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:01:21 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:21 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:21 localhost podman[314472]: 2025-11-23 10:01:21.009706807 +0000 UTC m=+0.051053621 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:01:21 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:21.234 262721 INFO neutron.agent.dhcp.agent [None req-a47c9766-828a-4ead-8779-c62e8dd8adeb - - - - - -] DHCP configuration for ports {'eccd9461-a422-4b62-b109-7d671c15da5d'} is completed#033[00m
Nov 23 05:01:21 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:21.877 2 INFO neutron.agent.securitygroups_rpc [None req-da0cfd7f-9c49-4dab-bd78-effdeef69255 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:22 localhost openstack_network_exporter[242118]: ERROR   10:01:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:01:22 localhost openstack_network_exporter[242118]: ERROR   10:01:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:01:22 localhost openstack_network_exporter[242118]: ERROR   10:01:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:01:22 localhost openstack_network_exporter[242118]: ERROR   10:01:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:01:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:01:22 localhost openstack_network_exporter[242118]: ERROR   10:01:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:01:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.616318) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083616363, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2550, "num_deletes": 259, "total_data_size": 3401241, "memory_usage": 3459864, "flush_reason": "Manual Compaction"}
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083629817, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2197040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18548, "largest_seqno": 21093, "table_properties": {"data_size": 2187848, "index_size": 5697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20399, "raw_average_key_size": 21, "raw_value_size": 2168831, "raw_average_value_size": 2254, "num_data_blocks": 249, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891914, "oldest_key_time": 1763891914, "file_creation_time": 1763892083, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 13548 microseconds, and 8069 cpu microseconds.
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.629866) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2197040 bytes OK
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.629891) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.631846) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.631867) EVENT_LOG_v1 {"time_micros": 1763892083631861, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.631888) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3389840, prev total WAL file size 3389840, number of live WAL files 2.
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.632878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end)
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2145KB)], [30(16MB)]
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083632958, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19079938, "oldest_snapshot_seqno": -1}
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12595 keys, 16078811 bytes, temperature: kUnknown
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083703462, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 16078811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16007094, "index_size": 39168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 338649, "raw_average_key_size": 26, "raw_value_size": 15792380, "raw_average_value_size": 1253, "num_data_blocks": 1477, "num_entries": 12595, "num_filter_entries": 12595, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892083, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.703888) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 16078811 bytes
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.705610) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 270.1 rd, 227.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 16.1 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(16.0) write-amplify(7.3) OK, records in: 13128, records dropped: 533 output_compression: NoCompression
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.705659) EVENT_LOG_v1 {"time_micros": 1763892083705646, "job": 16, "event": "compaction_finished", "compaction_time_micros": 70636, "compaction_time_cpu_micros": 46752, "output_level": 6, "num_output_files": 1, "total_output_size": 16078811, "num_input_records": 13128, "num_output_records": 12595, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083706118, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892083708412, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.632745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.708524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.708553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.708556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.708560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:23 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:23.708563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:24 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:24.120 2 INFO neutron.agent.securitygroups_rpc [None req-cef856d0-bc0f-421d-b6e3-a61c28d38f99 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:24 localhost nova_compute[281613]: 2025-11-23 10:01:24.374 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:24.491 262721 INFO neutron.agent.linux.ip_lib [None req-5f6c27bc-c569-4425-976c-45440b4b1346 - - - - - -] Device tap9bc2ac88-7c cannot be used as it has no MAC address#033[00m
Nov 23 05:01:24 localhost nova_compute[281613]: 2025-11-23 10:01:24.513 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:24 localhost kernel: device tap9bc2ac88-7c entered promiscuous mode
Nov 23 05:01:24 localhost nova_compute[281613]: 2025-11-23 10:01:24.524 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:24 localhost NetworkManager[5990]: <info>  [1763892084.5258] manager: (tap9bc2ac88-7c): new Generic device (/org/freedesktop/NetworkManager/Devices/24)
Nov 23 05:01:24 localhost ovn_controller[153786]: 2025-11-23T10:01:24Z|00097|binding|INFO|Claiming lport 9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa for this chassis.
Nov 23 05:01:24 localhost ovn_controller[153786]: 2025-11-23T10:01:24Z|00098|binding|INFO|9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa: Claiming unknown
Nov 23 05:01:24 localhost systemd-udevd[314503]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:01:24 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:24.541 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-cda26bc7-7b95-4a20-a6d7-3cb1e496426d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cda26bc7-7b95-4a20-a6d7-3cb1e496426d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a95d56ceca02400bb048e86377bec83f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bff41337-f066-4eee-843d-cfc39b11288b, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:24 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:24.544 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa in datapath cda26bc7-7b95-4a20-a6d7-3cb1e496426d bound to our chassis#033[00m
Nov 23 05:01:24 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:24.545 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network cda26bc7-7b95-4a20-a6d7-3cb1e496426d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:01:24 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:24.548 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[7c41e241-3379-40a6-b2bb-8ddf06792d92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:24 localhost ovn_controller[153786]: 2025-11-23T10:01:24Z|00099|binding|INFO|Setting lport 9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa ovn-installed in OVS
Nov 23 05:01:24 localhost ovn_controller[153786]: 2025-11-23T10:01:24Z|00100|binding|INFO|Setting lport 9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa up in Southbound
Nov 23 05:01:24 localhost nova_compute[281613]: 2025-11-23 10:01:24.573 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:24 localhost nova_compute[281613]: 2025-11-23 10:01:24.614 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:24 localhost nova_compute[281613]: 2025-11-23 10:01:24.644 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:25 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:25.104 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:25 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:25.106 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:01:25 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:25.110 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:25 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:25.111 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc82525-00be-43b1-b768-050965f097fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:25 localhost nova_compute[281613]: 2025-11-23 10:01:25.543 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:25 localhost podman[314558]: 
Nov 23 05:01:25 localhost podman[314558]: 2025-11-23 10:01:25.571723639 +0000 UTC m=+0.123537849 container create 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:01:25 localhost podman[314558]: 2025-11-23 10:01:25.495944232 +0000 UTC m=+0.047758462 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:01:25 localhost systemd[1]: Started libpod-conmon-9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279.scope.
Nov 23 05:01:25 localhost systemd[1]: Started libcrun container.
Nov 23 05:01:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dcee509f4cc9258c522c75d044d4384291b91169893e548202c2730b904533/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:01:25 localhost podman[314558]: 2025-11-23 10:01:25.65347035 +0000 UTC m=+0.205284560 container init 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:01:25 localhost podman[314558]: 2025-11-23 10:01:25.661197512 +0000 UTC m=+0.213011762 container start 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 05:01:25 localhost dnsmasq[314576]: started, version 2.85 cachesize 150
Nov 23 05:01:25 localhost dnsmasq[314576]: DNS service limited to local subnets
Nov 23 05:01:25 localhost dnsmasq[314576]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:01:25 localhost dnsmasq[314576]: warning: no upstream servers configured
Nov 23 05:01:25 localhost dnsmasq-dhcp[314576]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:01:25 localhost dnsmasq[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/addn_hosts - 0 addresses
Nov 23 05:01:25 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/host
Nov 23 05:01:25 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/opts
Nov 23 05:01:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:25.793 262721 INFO neutron.agent.dhcp.agent [None req-6a24921f-61c4-4af2-a8ef-f20f04ca5a10 - - - - - -] DHCP configuration for ports {'cb75c435-d3ec-4a7c-891a-c67a16640145'} is completed#033[00m
Nov 23 05:01:25 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:25.871 2 INFO neutron.agent.securitygroups_rpc [None req-2cdcfc76-bcd0-4cc8-b710-4dffd4e83ae1 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:25 localhost nova_compute[281613]: 2025-11-23 10:01:25.921 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:26 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:26.024 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:27 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:27.347 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:28 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:28.333 2 INFO neutron.agent.securitygroups_rpc [None req-ac3982b0-5538-471d-bc5b-e6d4442ddedd fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:29 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:29.645 2 INFO neutron.agent.securitygroups_rpc [None req-efb808b5-6542-451b-803e-d2906345bde2 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:30 localhost nova_compute[281613]: 2025-11-23 10:01:30.548 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:30 localhost nova_compute[281613]: 2025-11-23 10:01:30.923 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:31 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:31.204 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:29Z, description=, device_id=660d13f7-8144-4f7c-8ef3-3f8d8eec152e, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b182b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b18eb0>], id=f9e5f544-8d10-42b1-8f99-d91ba2eec2de, ip_allocation=immediate, mac_address=fa:16:3e:76:57:5e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1558, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:01:30Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:01:31 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 5 addresses
Nov 23 05:01:31 localhost podman[314594]: 2025-11-23 10:01:31.515190917 +0000 UTC m=+0.050449724 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 23 05:01:31 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:31 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:32 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:32.198 262721 INFO neutron.agent.dhcp.agent [None req-2d765eda-02d9-4feb-b066-409a9cfa3cfa - - - - - -] DHCP configuration for ports {'f9e5f544-8d10-42b1-8f99-d91ba2eec2de'} is completed#033[00m
Nov 23 05:01:33 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:01:33 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:33 localhost podman[314632]: 2025-11-23 10:01:33.008648095 +0000 UTC m=+0.057612970 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 05:01:33 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:33 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:33.728 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:33 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:33.730 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:01:33 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:33.733 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:33 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:33.734 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe4cb2d-2523-4611-9d65-d770a0b58451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:34 localhost nova_compute[281613]: 2025-11-23 10:01:34.455 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:35 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:35.202 2 INFO neutron.agent.securitygroups_rpc [None req-9e0f6898-e450-4b10-a9b6-2526799f670d fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:35 localhost nova_compute[281613]: 2025-11-23 10:01:35.576 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:35 localhost nova_compute[281613]: 2025-11-23 10:01:35.925 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:01:37 localhost podman[314672]: 2025-11-23 10:01:37.194853303 +0000 UTC m=+0.102344368 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:01:37 localhost podman[314672]: 2025-11-23 10:01:37.208140546 +0000 UTC m=+0.115631651 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:01:37 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:01:37 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:37.242 2 INFO neutron.agent.securitygroups_rpc [None req-0d1bae38-c3b0-4ed4-ba2a-9b0a25fae4ab 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:37 localhost podman[314670]: 2025-11-23 10:01:37.29323473 +0000 UTC m=+0.206319018 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 05:01:37 localhost podman[314670]: 2025-11-23 10:01:37.335443947 +0000 UTC m=+0.248528205 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 23 05:01:37 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:01:37 localhost podman[314671]: 2025-11-23 10:01:37.347794106 +0000 UTC m=+0.258479498 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 05:01:37 localhost podman[314671]: 2025-11-23 10:01:37.42745983 +0000 UTC m=+0.338145252 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 05:01:37 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:01:37 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:37.491 2 INFO neutron.agent.securitygroups_rpc [None req-b60c8ac0-a755-4981-a84e-d95a726ca718 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:38 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:01:38 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:01:38 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:01:38 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:38.886 2 INFO neutron.agent.securitygroups_rpc [None req-e4114a4e-94f5-47df-bc1b-6dc35d238db6 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:38 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:38.941 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:38Z, description=, device_id=660d13f7-8144-4f7c-8ef3-3f8d8eec152e, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c9e940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790c9ed30>], id=9a7f0190-fb32-4c14-9ce5-846317539294, ip_allocation=immediate, mac_address=fa:16:3e:27:00:0e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:01:19Z, description=, dns_domain=, id=cda26bc7-7b95-4a20-a6d7-3cb1e496426d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1149157187, port_security_enabled=True, project_id=a95d56ceca02400bb048e86377bec83f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4524, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1532, status=ACTIVE, subnets=['02bd77e9-a90f-4ec6-88e7-26255e7683f0'], tags=[], tenant_id=a95d56ceca02400bb048e86377bec83f, updated_at=2025-11-23T10:01:22Z, vlan_transparent=None, network_id=cda26bc7-7b95-4a20-a6d7-3cb1e496426d, port_security_enabled=False, project_id=a95d56ceca02400bb048e86377bec83f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1571, status=DOWN, tags=[], tenant_id=a95d56ceca02400bb048e86377bec83f, updated_at=2025-11-23T10:01:38Z on network cda26bc7-7b95-4a20-a6d7-3cb1e496426d#033[00m
Nov 23 05:01:39 localhost podman[314817]: 2025-11-23 10:01:39.168496376 +0000 UTC m=+0.059460731 container kill 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:01:39 localhost dnsmasq[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/addn_hosts - 1 addresses
Nov 23 05:01:39 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/host
Nov 23 05:01:39 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/opts
Nov 23 05:01:39 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:39.638 262721 INFO neutron.agent.dhcp.agent [None req-de750905-2f3a-4a30-8311-1eaa96dcc0e3 - - - - - -] DHCP configuration for ports {'9a7f0190-fb32-4c14-9ce5-846317539294'} is completed#033[00m
Nov 23 05:01:39 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:39.776 2 INFO neutron.agent.securitygroups_rpc [None req-3c7abe8e-704d-455a-8480-13cb405babe1 8d3ccb2bccdf4a12bb3b492992930601 6de614a4ddfd4f868264e9fc1dee856a - - default default] Security group member updated ['980b5aab-daf0-44c4-8e04-21e80ebf2d43']#033[00m
Nov 23 05:01:39 localhost nova_compute[281613]: 2025-11-23 10:01:39.804 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:39 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:39.820 2 INFO neutron.agent.securitygroups_rpc [None req-3839d769-9cff-43fb-b0be-05026e050e30 fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:39 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:39.994 2 INFO neutron.agent.securitygroups_rpc [None req-3c7abe8e-704d-455a-8480-13cb405babe1 8d3ccb2bccdf4a12bb3b492992930601 6de614a4ddfd4f868264e9fc1dee856a - - default default] Security group member updated ['980b5aab-daf0-44c4-8e04-21e80ebf2d43']#033[00m
Nov 23 05:01:40 localhost nova_compute[281613]: 2025-11-23 10:01:40.580 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:40 localhost nova_compute[281613]: 2025-11-23 10:01:40.927 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:41 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:41.006 2 INFO neutron.agent.securitygroups_rpc [None req-7b7ae10d-5331-41d7-96b9-7aafc7180edd 8d3ccb2bccdf4a12bb3b492992930601 6de614a4ddfd4f868264e9fc1dee856a - - default default] Security group member updated ['980b5aab-daf0-44c4-8e04-21e80ebf2d43']#033[00m
Nov 23 05:01:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:41.247 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:38Z, description=, device_id=660d13f7-8144-4f7c-8ef3-3f8d8eec152e, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b77c10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d16820>], id=9a7f0190-fb32-4c14-9ce5-846317539294, ip_allocation=immediate, mac_address=fa:16:3e:27:00:0e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:01:19Z, description=, dns_domain=, id=cda26bc7-7b95-4a20-a6d7-3cb1e496426d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1149157187, port_security_enabled=True, project_id=a95d56ceca02400bb048e86377bec83f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4524, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1532, status=ACTIVE, subnets=['02bd77e9-a90f-4ec6-88e7-26255e7683f0'], tags=[], tenant_id=a95d56ceca02400bb048e86377bec83f, updated_at=2025-11-23T10:01:22Z, vlan_transparent=None, network_id=cda26bc7-7b95-4a20-a6d7-3cb1e496426d, port_security_enabled=False, project_id=a95d56ceca02400bb048e86377bec83f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1571, status=DOWN, tags=[], tenant_id=a95d56ceca02400bb048e86377bec83f, updated_at=2025-11-23T10:01:38Z on network cda26bc7-7b95-4a20-a6d7-3cb1e496426d#033[00m
Nov 23 05:01:41 localhost podman[240144]: time="2025-11-23T10:01:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:01:41 localhost podman[240144]: @ - - [23/Nov/2025:10:01:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158149 "" "Go-http-client/1.1"
Nov 23 05:01:41 localhost podman[240144]: @ - - [23/Nov/2025:10:01:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19678 "" "Go-http-client/1.1"
Nov 23 05:01:41 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:41.349 2 INFO neutron.agent.securitygroups_rpc [None req-9181f901-83e9-4198-8a73-d33dcd9ef0fc 8d3ccb2bccdf4a12bb3b492992930601 6de614a4ddfd4f868264e9fc1dee856a - - default default] Security group member updated ['980b5aab-daf0-44c4-8e04-21e80ebf2d43']#033[00m
Nov 23 05:01:41 localhost dnsmasq[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/addn_hosts - 1 addresses
Nov 23 05:01:41 localhost podman[314852]: 2025-11-23 10:01:41.51930917 +0000 UTC m=+0.070880265 container kill 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 05:01:41 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/host
Nov 23 05:01:41 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/opts
Nov 23 05:01:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:41.847 262721 INFO neutron.agent.dhcp.agent [None req-8d9610c6-e7aa-4483-8ff3-14a13586fd14 - - - - - -] DHCP configuration for ports {'9a7f0190-fb32-4c14-9ce5-846317539294'} is completed#033[00m
Nov 23 05:01:41 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:41.897 2 INFO neutron.agent.securitygroups_rpc [None req-7527e00c-fa2d-4f7e-82d1-1aa51478130d fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:42 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:42.730 2 INFO neutron.agent.securitygroups_rpc [None req-02caec08-4fe5-4eae-8f59-0f41df22086a fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:43 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:43.298 2 INFO neutron.agent.securitygroups_rpc [None req-7ad0c56d-3950-498c-8f5a-35533112ee18 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:01:43 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:43.366 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:01:42Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d0e5b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d0ea60>], id=b81b363e-2e24-4296-a946-9b2541631cce, ip_allocation=immediate, mac_address=fa:16:3e:82:85:e9, name=tempest-FloatingIPTestJSON-326384585, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:01:19Z, description=, dns_domain=, id=cda26bc7-7b95-4a20-a6d7-3cb1e496426d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-1149157187, port_security_enabled=True, project_id=a95d56ceca02400bb048e86377bec83f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4524, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1532, status=ACTIVE, subnets=['02bd77e9-a90f-4ec6-88e7-26255e7683f0'], tags=[], tenant_id=a95d56ceca02400bb048e86377bec83f, updated_at=2025-11-23T10:01:22Z, vlan_transparent=None, network_id=cda26bc7-7b95-4a20-a6d7-3cb1e496426d, port_security_enabled=True, project_id=a95d56ceca02400bb048e86377bec83f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b05f7049-06d0-4552-94d8-97f623373332'], standard_attr_id=1577, status=DOWN, tags=[], tenant_id=a95d56ceca02400bb048e86377bec83f, updated_at=2025-11-23T10:01:42Z on network cda26bc7-7b95-4a20-a6d7-3cb1e496426d#033[00m
Nov 23 05:01:43 localhost dnsmasq[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/addn_hosts - 2 addresses
Nov 23 05:01:43 localhost podman[314891]: 2025-11-23 10:01:43.644988572 +0000 UTC m=+0.066507235 container kill 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 23 05:01:43 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/host
Nov 23 05:01:43 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/opts
Nov 23 05:01:43 localhost systemd[1]: tmp-crun.tMKg56.mount: Deactivated successfully.
Nov 23 05:01:43 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:43.653 2 INFO neutron.agent.securitygroups_rpc [None req-6a3f8f10-e09b-4d25-8141-9c205b1a054c fcc428367cfa48b088d781e43c36f195 a180dbb035ce42ac9ec3178829ba27ed - - default default] Security group member updated ['ec43b846-c0b6-48e2-bcdb-df3dfa286247']#033[00m
Nov 23 05:01:43 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:43.908 262721 INFO neutron.agent.dhcp.agent [None req-bb61e44b-fdd5-4501-9079-63d08d9d415c - - - - - -] DHCP configuration for ports {'b81b363e-2e24-4296-a946-9b2541631cce'} is completed#033[00m
Nov 23 05:01:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:44.221 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 10.100.0.2 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:44.223 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:01:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:44.226 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:44 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:44.227 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5e704d-1875-4796-87dc-7c28b687ed3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:45.177 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:45 localhost nova_compute[281613]: 2025-11-23 10:01:45.177 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:45.179 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:01:45 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:45.333 2 INFO neutron.agent.securitygroups_rpc [None req-13ffd066-3677-4907-b0b4-8f11a42c5f7c a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:01:45 localhost dnsmasq[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/addn_hosts - 1 addresses
Nov 23 05:01:45 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/host
Nov 23 05:01:45 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/opts
Nov 23 05:01:45 localhost nova_compute[281613]: 2025-11-23 10:01:45.583 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:45 localhost podman[314929]: 2025-11-23 10:01:45.588654474 +0000 UTC m=+0.065008363 container kill 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:01:45 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:45.911 2 INFO neutron.agent.securitygroups_rpc [None req-b9b41fb6-e2e7-4179-8f08-7b97a3dfa0c7 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:45 localhost nova_compute[281613]: 2025-11-23 10:01:45.929 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:46 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:46.284 2 INFO neutron.agent.securitygroups_rpc [None req-f7529e1b-3bf3-41ad-a49e-e39cf58ffefa ca36e3c530cd4996add76add048683eb 461e34582027490ebd34279a384a57b1 - - default default] Security group rule updated ['ce47e028-f950-480c-a113-98c15c008254']#033[00m
Nov 23 05:01:47 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:01:47 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3232634724' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:01:47 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:01:47 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3232634724' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:01:47 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:47.246 2 INFO neutron.agent.securitygroups_rpc [None req-cfd83c6e-245c-4505-a8d2-3c8b7de44cbd 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:01:47 localhost podman[314960]: 2025-11-23 10:01:47.436879769 +0000 UTC m=+0.084762666 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:01:47 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:47.443 262721 INFO neutron.agent.linux.ip_lib [None req-eaa8f51f-603f-4074-a4ac-9c1fef059326 - - - - - -] Device tap69528ad3-c9 cannot be used as it has no MAC address#033[00m
Nov 23 05:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:01:47 localhost dnsmasq[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/addn_hosts - 0 addresses
Nov 23 05:01:47 localhost podman[314990]: 2025-11-23 10:01:47.474412648 +0000 UTC m=+0.056645364 container kill 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:01:47 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/host
Nov 23 05:01:47 localhost dnsmasq-dhcp[314576]: read /var/lib/neutron/dhcp/cda26bc7-7b95-4a20-a6d7-3cb1e496426d/opts
Nov 23 05:01:47 localhost podman[314960]: 2025-11-23 10:01:47.478192971 +0000 UTC m=+0.126075878 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:01:47 localhost nova_compute[281613]: 2025-11-23 10:01:47.519 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:47 localhost kernel: device tap69528ad3-c9 entered promiscuous mode
Nov 23 05:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:01:47 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:01:47 localhost systemd-udevd[315033]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:01:47 localhost NetworkManager[5990]: <info>  [1763892107.5342] manager: (tap69528ad3-c9): new Generic device (/org/freedesktop/NetworkManager/Devices/25)
Nov 23 05:01:47 localhost nova_compute[281613]: 2025-11-23 10:01:47.536 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:47 localhost ovn_controller[153786]: 2025-11-23T10:01:47Z|00101|binding|INFO|Claiming lport 69528ad3-c9a1-476e-9f4f-24eb413bb599 for this chassis.
Nov 23 05:01:47 localhost ovn_controller[153786]: 2025-11-23T10:01:47Z|00102|binding|INFO|69528ad3-c9a1-476e-9f4f-24eb413bb599: Claiming unknown
Nov 23 05:01:47 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:47.548 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-40933a3e-5945-492e-9510-237099115dc2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40933a3e-5945-492e-9510-237099115dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6de614a4ddfd4f868264e9fc1dee856a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e88cf60-66c0-4507-84f2-cf3fecd1886b, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=69528ad3-c9a1-476e-9f4f-24eb413bb599) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:47 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:47.554 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 69528ad3-c9a1-476e-9f4f-24eb413bb599 in datapath 40933a3e-5945-492e-9510-237099115dc2 bound to our chassis#033[00m
Nov 23 05:01:47 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:47.556 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 40933a3e-5945-492e-9510-237099115dc2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:01:47 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:47.558 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[4c82550e-2756-41a5-9755-65a90f610414]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:47 localhost ovn_controller[153786]: 2025-11-23T10:01:47Z|00103|binding|INFO|Setting lport 69528ad3-c9a1-476e-9f4f-24eb413bb599 ovn-installed in OVS
Nov 23 05:01:47 localhost nova_compute[281613]: 2025-11-23 10:01:47.580 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:47 localhost ovn_controller[153786]: 2025-11-23T10:01:47Z|00104|binding|INFO|Setting lport 69528ad3-c9a1-476e-9f4f-24eb413bb599 up in Southbound
Nov 23 05:01:47 localhost podman[315013]: 2025-11-23 10:01:47.590564023 +0000 UTC m=+0.131284921 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.603520) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107603612, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 538, "num_deletes": 257, "total_data_size": 416165, "memory_usage": 427768, "flush_reason": "Manual Compaction"}
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107607802, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 271753, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21098, "largest_seqno": 21631, "table_properties": {"data_size": 269139, "index_size": 661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6592, "raw_average_key_size": 18, "raw_value_size": 263608, "raw_average_value_size": 740, "num_data_blocks": 30, "num_entries": 356, "num_filter_entries": 356, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892085, "oldest_key_time": 1763892085, "file_creation_time": 1763892107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 4302 microseconds, and 1496 cpu microseconds.
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.607845) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 271753 bytes OK
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.607873) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.610466) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.610490) EVENT_LOG_v1 {"time_micros": 1763892107610484, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.610511) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 412982, prev total WAL file size 413306, number of live WAL files 2.
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.612627) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303234' seq:72057594037927935, type:22 .. '6C6F676D0034323737' seq:0, type:0; will stop at (end)
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(265KB)], [33(15MB)]
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107612714, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 16350564, "oldest_snapshot_seqno": -1}
Nov 23 05:01:47 localhost nova_compute[281613]: 2025-11-23 10:01:47.627 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:47 localhost podman[315013]: 2025-11-23 10:01:47.626668282 +0000 UTC m=+0.167389170 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:01:47 localhost podman[314961]: 2025-11-23 10:01:47.54161466 +0000 UTC m=+0.189765154 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:01:47 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:01:47 localhost nova_compute[281613]: 2025-11-23 10:01:47.666 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:47 localhost podman[314961]: 2025-11-23 10:01:47.676037076 +0000 UTC m=+0.324187530 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12421 keys, 16030706 bytes, temperature: kUnknown
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107685183, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 16030706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15959947, "index_size": 38594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 336003, "raw_average_key_size": 27, "raw_value_size": 15748168, "raw_average_value_size": 1267, "num_data_blocks": 1448, "num_entries": 12421, "num_filter_entries": 12421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.686185) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 16030706 bytes
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.688027) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.2 rd, 218.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 15.3 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(119.2) write-amplify(59.0) OK, records in: 12951, records dropped: 530 output_compression: NoCompression
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.688049) EVENT_LOG_v1 {"time_micros": 1763892107688039, "job": 18, "event": "compaction_finished", "compaction_time_micros": 73245, "compaction_time_cpu_micros": 35753, "output_level": 6, "num_output_files": 1, "total_output_size": 16030706, "num_input_records": 12951, "num_output_records": 12421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107688201, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892107690625, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.611083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.690704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.690710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.690711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.690713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:01:47.690715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:01:47 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:01:47 localhost podman[315031]: 2025-11-23 10:01:47.644164512 +0000 UTC m=+0.102986254 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:01:47 localhost podman[315031]: 2025-11-23 10:01:47.722759957 +0000 UTC m=+0.181581709 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:01:47 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:01:47 localhost ovn_controller[153786]: 2025-11-23T10:01:47Z|00105|binding|INFO|Releasing lport 9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa from this chassis (sb_readonly=0)
Nov 23 05:01:47 localhost nova_compute[281613]: 2025-11-23 10:01:47.999 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:48 localhost kernel: device tap9bc2ac88-7c left promiscuous mode
Nov 23 05:01:48 localhost ovn_controller[153786]: 2025-11-23T10:01:47Z|00106|binding|INFO|Setting lport 9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa down in Southbound
Nov 23 05:01:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:48.009 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-cda26bc7-7b95-4a20-a6d7-3cb1e496426d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cda26bc7-7b95-4a20-a6d7-3cb1e496426d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a95d56ceca02400bb048e86377bec83f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bff41337-f066-4eee-843d-cfc39b11288b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:48.012 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 9bc2ac88-7cf5-4d89-ac3a-b85e53d107fa in datapath cda26bc7-7b95-4a20-a6d7-3cb1e496426d unbound from our chassis#033[00m
Nov 23 05:01:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:48.015 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cda26bc7-7b95-4a20-a6d7-3cb1e496426d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:48.016 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[cb56cd08-25fb-4704-90d2-7525a7925717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:48 localhost nova_compute[281613]: 2025-11-23 10:01:48.018 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:48 localhost podman[315133]: 
Nov 23 05:01:48 localhost podman[315133]: 2025-11-23 10:01:48.609728466 +0000 UTC m=+0.085439723 container create 213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40933a3e-5945-492e-9510-237099115dc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 05:01:48 localhost systemd[1]: Started libpod-conmon-213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5.scope.
Nov 23 05:01:48 localhost podman[315133]: 2025-11-23 10:01:48.559706005 +0000 UTC m=+0.035417292 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:01:48 localhost systemd[1]: Started libcrun container.
Nov 23 05:01:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/529e49ce4025aafabf59e5583a2ca695e95d16ffcc3e4bb3fcdef34aa99e1f34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:01:48 localhost podman[315133]: 2025-11-23 10:01:48.692865915 +0000 UTC m=+0.168577172 container init 213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40933a3e-5945-492e-9510-237099115dc2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 05:01:48 localhost podman[315133]: 2025-11-23 10:01:48.703056575 +0000 UTC m=+0.178767832 container start 213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40933a3e-5945-492e-9510-237099115dc2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 23 05:01:48 localhost dnsmasq[315152]: started, version 2.85 cachesize 150
Nov 23 05:01:48 localhost dnsmasq[315152]: DNS service limited to local subnets
Nov 23 05:01:48 localhost dnsmasq[315152]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:01:48 localhost dnsmasq[315152]: warning: no upstream servers configured
Nov 23 05:01:48 localhost dnsmasq-dhcp[315152]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:01:48 localhost dnsmasq[315152]: read /var/lib/neutron/dhcp/40933a3e-5945-492e-9510-237099115dc2/addn_hosts - 0 addresses
Nov 23 05:01:48 localhost dnsmasq-dhcp[315152]: read /var/lib/neutron/dhcp/40933a3e-5945-492e-9510-237099115dc2/host
Nov 23 05:01:48 localhost dnsmasq-dhcp[315152]: read /var/lib/neutron/dhcp/40933a3e-5945-492e-9510-237099115dc2/opts
Nov 23 05:01:48 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:48.936 262721 INFO neutron.agent.dhcp.agent [None req-6b3e7eb3-d9c7-484a-ba15-c838ea741c45 - - - - - -] DHCP configuration for ports {'8054b526-163b-4f50-9048-7911dc511934'} is completed#033[00m
Nov 23 05:01:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:50 localhost podman[315170]: 2025-11-23 10:01:50.248863827 +0000 UTC m=+0.065460405 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:01:50 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:01:50 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:50 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:50 localhost nova_compute[281613]: 2025-11-23 10:01:50.621 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:50 localhost nova_compute[281613]: 2025-11-23 10:01:50.931 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:50 localhost dnsmasq[314576]: exiting on receipt of SIGTERM
Nov 23 05:01:50 localhost podman[315207]: 2025-11-23 10:01:50.932999786 +0000 UTC m=+0.065941739 container kill 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 05:01:50 localhost systemd[1]: libpod-9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279.scope: Deactivated successfully.
Nov 23 05:01:51 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:51.004 2 INFO neutron.agent.securitygroups_rpc [None req-fcaf6d85-3067-425b-90fc-65fb17c22c5c 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:51 localhost podman[315221]: 2025-11-23 10:01:51.010692406 +0000 UTC m=+0.057802395 container died 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:01:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279-userdata-shm.mount: Deactivated successfully.
Nov 23 05:01:51 localhost podman[315221]: 2025-11-23 10:01:51.054845166 +0000 UTC m=+0.101955115 container cleanup 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 05:01:51 localhost systemd[1]: libpod-conmon-9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279.scope: Deactivated successfully.
Nov 23 05:01:51 localhost podman[315222]: 2025-11-23 10:01:51.134212473 +0000 UTC m=+0.176198012 container remove 9663080b6433c636323308c4b8b2631cdb3b220dba2ca02dc3f510309f5ce279 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cda26bc7-7b95-4a20-a6d7-3cb1e496426d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:01:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:51.168 262721 INFO neutron.agent.dhcp.agent [None req-479768a1-cf5e-4382-a3a0-bed5cb7c5c27 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:51.180 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:01:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:51.273 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:51 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:01:51 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:51 localhost podman[315267]: 2025-11-23 10:01:51.745863963 +0000 UTC m=+0.059070761 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 05:01:51 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:51 localhost nova_compute[281613]: 2025-11-23 10:01:51.836 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:51 localhost systemd[1]: var-lib-containers-storage-overlay-a8dcee509f4cc9258c522c75d044d4384291b91169893e548202c2730b904533-merged.mount: Deactivated successfully.
Nov 23 05:01:51 localhost systemd[1]: run-netns-qdhcp\x2dcda26bc7\x2d7b95\x2d4a20\x2da6d7\x2d3cb1e496426d.mount: Deactivated successfully.
Nov 23 05:01:52 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:52.176 2 INFO neutron.agent.securitygroups_rpc [None req-b422b6dc-3b22-4323-bd62-6ed72320b39a 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:52 localhost openstack_network_exporter[242118]: ERROR   10:01:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:01:52 localhost openstack_network_exporter[242118]: ERROR   10:01:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:01:52 localhost openstack_network_exporter[242118]: ERROR   10:01:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:01:52 localhost openstack_network_exporter[242118]: ERROR   10:01:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:01:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:01:52 localhost openstack_network_exporter[242118]: ERROR   10:01:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:01:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:01:53 localhost dnsmasq[315152]: exiting on receipt of SIGTERM
Nov 23 05:01:53 localhost podman[315304]: 2025-11-23 10:01:53.302904184 +0000 UTC m=+0.065626141 container kill 213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40933a3e-5945-492e-9510-237099115dc2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:01:53 localhost systemd[1]: libpod-213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5.scope: Deactivated successfully.
Nov 23 05:01:53 localhost podman[315317]: 2025-11-23 10:01:53.384767539 +0000 UTC m=+0.062507975 container died 213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40933a3e-5945-492e-9510-237099115dc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:01:53 localhost podman[315317]: 2025-11-23 10:01:53.419772598 +0000 UTC m=+0.097513014 container cleanup 213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40933a3e-5945-492e-9510-237099115dc2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 23 05:01:53 localhost systemd[1]: libpod-conmon-213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5.scope: Deactivated successfully.
Nov 23 05:01:53 localhost podman[315318]: 2025-11-23 10:01:53.453384799 +0000 UTC m=+0.128236826 container remove 213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40933a3e-5945-492e-9510-237099115dc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 05:01:53 localhost nova_compute[281613]: 2025-11-23 10:01:53.466 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:53 localhost ovn_controller[153786]: 2025-11-23T10:01:53Z|00107|binding|INFO|Releasing lport 69528ad3-c9a1-476e-9f4f-24eb413bb599 from this chassis (sb_readonly=0)
Nov 23 05:01:53 localhost ovn_controller[153786]: 2025-11-23T10:01:53Z|00108|binding|INFO|Setting lport 69528ad3-c9a1-476e-9f4f-24eb413bb599 down in Southbound
Nov 23 05:01:53 localhost kernel: device tap69528ad3-c9 left promiscuous mode
Nov 23 05:01:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:53.477 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-40933a3e-5945-492e-9510-237099115dc2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40933a3e-5945-492e-9510-237099115dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6de614a4ddfd4f868264e9fc1dee856a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e88cf60-66c0-4507-84f2-cf3fecd1886b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=69528ad3-c9a1-476e-9f4f-24eb413bb599) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:01:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:53.480 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 69528ad3-c9a1-476e-9f4f-24eb413bb599 in datapath 40933a3e-5945-492e-9510-237099115dc2 unbound from our chassis#033[00m
Nov 23 05:01:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:53.482 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40933a3e-5945-492e-9510-237099115dc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:01:53 localhost nova_compute[281613]: 2025-11-23 10:01:53.484 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:01:53.483 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[0abddab3-39c3-45f5-a75c-a8cede732653]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:01:53 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:53.534 2 INFO neutron.agent.securitygroups_rpc [None req-76b8a8df-4b24-4290-be38-6011d037c5af a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:01:53 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:53.591 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:53 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:53.958 262721 INFO neutron.agent.dhcp.agent [None req-5a89f0eb-c35d-404b-b35a-b20cf5baa65e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:54 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:54.007 2 INFO neutron.agent.securitygroups_rpc [None req-988e37e0-0049-4c8c-be99-30016558f502 a02463dab01b4a318fbc9bb3ebbc0c3f a95d56ceca02400bb048e86377bec83f - - default default] Security group member updated ['b05f7049-06d0-4552-94d8-97f623373332']#033[00m
Nov 23 05:01:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:54.023 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:54 localhost systemd[1]: var-lib-containers-storage-overlay-529e49ce4025aafabf59e5583a2ca695e95d16ffcc3e4bb3fcdef34aa99e1f34-merged.mount: Deactivated successfully.
Nov 23 05:01:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-213bfece663aa951aa3bb0426b119a31020d1b7c7247d4cadf5184439ac9dbe5-userdata-shm.mount: Deactivated successfully.
Nov 23 05:01:54 localhost systemd[1]: run-netns-qdhcp\x2d40933a3e\x2d5945\x2d492e\x2d9510\x2d237099115dc2.mount: Deactivated successfully.
Nov 23 05:01:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:01:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:55.144 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:55.384 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:55 localhost nova_compute[281613]: 2025-11-23 10:01:55.624 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:55 localhost nova_compute[281613]: 2025-11-23 10:01:55.935 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:56 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:56.296 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:56 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:56.523 2 INFO neutron.agent.securitygroups_rpc [None req-dca3126a-73f1-4b2f-a026-193f4196c9b1 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:56 localhost nova_compute[281613]: 2025-11-23 10:01:56.524 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:01:56 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:01:56 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:01:56 localhost podman[315363]: 2025-11-23 10:01:56.91460072 +0000 UTC m=+0.074586597 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 05:01:56 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:01:56 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:56.956 2 INFO neutron.agent.securitygroups_rpc [None req-7d98cf13-9412-4d6e-887c-62597fa6d091 2a1728c536894f859fec3b140f01d4cc fb9791d358174b77957d83c427c41282 - - default default] Security group member updated ['c43d5a51-ed13-44c5-81c1-fc0ae615d5d7']#033[00m
Nov 23 05:01:57 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:57.412 2 INFO neutron.agent.securitygroups_rpc [None req-bcaf3ac0-9e11-402c-a86f-10a7b58b202d 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:01:57 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:57.846 2 INFO neutron.agent.securitygroups_rpc [None req-640caa93-9dd6-4d73-895a-6c48cb53e831 2a1728c536894f859fec3b140f01d4cc fb9791d358174b77957d83c427c41282 - - default default] Security group member updated ['c43d5a51-ed13-44c5-81c1-fc0ae615d5d7']#033[00m
Nov 23 05:01:58 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:01:58.005 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:01:58 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:58.472 2 INFO neutron.agent.securitygroups_rpc [None req-00590570-c613-4100-977f-94a0fcdda7a2 2a1728c536894f859fec3b140f01d4cc fb9791d358174b77957d83c427c41282 - - default default] Security group member updated ['c43d5a51-ed13-44c5-81c1-fc0ae615d5d7']#033[00m
Nov 23 05:01:59 localhost neutron_sriov_agent[255613]: 2025-11-23 10:01:59.652 2 INFO neutron.agent.securitygroups_rpc [None req-a72c44cb-60bc-4c8a-a77d-8d03ce0529ac 2a1728c536894f859fec3b140f01d4cc fb9791d358174b77957d83c427c41282 - - default default] Security group member updated ['c43d5a51-ed13-44c5-81c1-fc0ae615d5d7']#033[00m
Nov 23 05:01:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:02:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3910451066' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:02:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:02:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3910451066' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:02:00 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:00.276 2 INFO neutron.agent.securitygroups_rpc [None req-b6b7fea1-91f0-4eed-a7d1-b3a653a6eb31 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:00 localhost nova_compute[281613]: 2025-11-23 10:02:00.628 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:00 localhost nova_compute[281613]: 2025-11-23 10:02:00.936 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:01 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:01.019 2 INFO neutron.agent.securitygroups_rpc [None req-632d2016-3c1b-4f91-9e80-c737b7a909f7 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:02:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 9597 writes, 38K keys, 9597 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 9597 writes, 2407 syncs, 3.99 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3824 writes, 13K keys, 3824 commit groups, 1.0 writes per commit group, ingest: 14.85 MB, 0.02 MB/s#012Interval WAL: 3824 writes, 1626 syncs, 2.35 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 05:02:02 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:02.371 2 INFO neutron.agent.securitygroups_rpc [None req-daacf7a5-9b59-487b-876d-20ffffe4895d 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:02 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:02.804 2 INFO neutron.agent.securitygroups_rpc [None req-7b96091d-4d68-47d5-ad40-bc5e46f787b5 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:02:02 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3963313061' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:02:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:02:02 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3963313061' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:02:04 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:04.471 2 INFO neutron.agent.securitygroups_rpc [None req-69ba403f-ba50-44e1-b8d0-5abce2396fa5 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:05 localhost nova_compute[281613]: 2025-11-23 10:02:05.630 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:05 localhost nova_compute[281613]: 2025-11-23 10:02:05.938 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:02:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 8101 writes, 33K keys, 8101 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 8101 writes, 2072 syncs, 3.91 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3030 writes, 10K keys, 3030 commit groups, 1.0 writes per commit group, ingest: 10.59 MB, 0.02 MB/s#012Interval WAL: 3030 writes, 1321 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 05:02:06 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:06.454 2 INFO neutron.agent.securitygroups_rpc [None req-027c2ba7-b3e9-44a3-b67a-0b2bdc3ad43a 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:06 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:06.588 2 INFO neutron.agent.securitygroups_rpc [None req-027c2ba7-b3e9-44a3-b67a-0b2bdc3ad43a 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:07 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:07.090 2 INFO neutron.agent.securitygroups_rpc [None req-ed1fae3b-c78a-4940-9647-1237b664bff1 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:07 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:07.245 2 INFO neutron.agent.securitygroups_rpc [None req-57d318cc-de08-4b5e-a1a2-97e0506818c3 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:07 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:07.973 2 INFO neutron.agent.securitygroups_rpc [None req-fccb08e0-0b04-4e9e-9f46-388ae08a5315 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:02:08 localhost systemd[1]: tmp-crun.Iuao0T.mount: Deactivated successfully.
Nov 23 05:02:08 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:08.218 2 INFO neutron.agent.securitygroups_rpc [None req-ccd00d64-5365-45b4-a98a-49c5095f8557 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:08 localhost podman[315385]: 2025-11-23 10:02:08.256024229 +0000 UTC m=+0.152504392 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:02:08 localhost podman[315386]: 2025-11-23 10:02:08.26002819 +0000 UTC m=+0.150020095 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:02:08 localhost podman[315384]: 2025-11-23 10:02:08.217448532 +0000 UTC m=+0.114795098 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Nov 23 05:02:08 localhost podman[315386]: 2025-11-23 10:02:08.29397156 +0000 UTC m=+0.183963465 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:02:08 localhost podman[315384]: 2025-11-23 10:02:08.305049444 +0000 UTC m=+0.202396010 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, distribution-scope=public, release=1755695350, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Nov 23 05:02:08 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:02:08 localhost podman[315385]: 2025-11-23 10:02:08.324391984 +0000 UTC m=+0.220872157 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 05:02:08 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:02:08 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:02:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e116 e116: 6 total, 6 up, 6 in
Nov 23 05:02:09 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:09.127 2 INFO neutron.agent.securitygroups_rpc [None req-4a4cd2de-ce5b-469c-97bb-90c17373d140 268d02d4288b4ff3a9dab77419bcf96a 23ffb5a89d5d4d8a8900ea750309030f - - default default] Security group member updated ['8eb14703-b106-4f91-b864-8b16a806bee3']#033[00m
Nov 23 05:02:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:09.189 262721 INFO neutron.agent.linux.ip_lib [None req-55802699-5343-457e-a8b9-262dc151c94a - - - - - -] Device tap6b10e464-55 cannot be used as it has no MAC address#033[00m
Nov 23 05:02:09 localhost systemd[1]: tmp-crun.lGscxb.mount: Deactivated successfully.
Nov 23 05:02:09 localhost nova_compute[281613]: 2025-11-23 10:02:09.248 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:09 localhost kernel: device tap6b10e464-55 entered promiscuous mode
Nov 23 05:02:09 localhost NetworkManager[5990]: <info>  [1763892129.2592] manager: (tap6b10e464-55): new Generic device (/org/freedesktop/NetworkManager/Devices/26)
Nov 23 05:02:09 localhost nova_compute[281613]: 2025-11-23 10:02:09.260 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:09 localhost ovn_controller[153786]: 2025-11-23T10:02:09Z|00109|binding|INFO|Claiming lport 6b10e464-5574-4905-9522-d6f013251048 for this chassis.
Nov 23 05:02:09 localhost ovn_controller[153786]: 2025-11-23T10:02:09Z|00110|binding|INFO|6b10e464-5574-4905-9522-d6f013251048: Claiming unknown
Nov 23 05:02:09 localhost systemd-udevd[315454]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:02:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:09.268 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:02:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:09.269 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:02:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:09.269 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:02:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:09.278 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe97:cd6/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23ffb5a89d5d4d8a8900ea750309030f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39d5a421-2741-461f-8446-7c8f057988a5, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=6b10e464-5574-4905-9522-d6f013251048) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:09.280 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 6b10e464-5574-4905-9522-d6f013251048 in datapath c1757f5c-90d0-4a1c-94be-ddf0f3276eb5 bound to our chassis#033[00m
Nov 23 05:02:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:09.285 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0199ed3a-af85-4aeb-bb35-c7bc62890761 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 05:02:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:09.286 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1757f5c-90d0-4a1c-94be-ddf0f3276eb5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:02:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:09.287 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[c946ffee-eddb-400b-bb54-7213a411a538]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:09 localhost journal[229736]: ethtool ioctl error on tap6b10e464-55: No such device
Nov 23 05:02:09 localhost journal[229736]: ethtool ioctl error on tap6b10e464-55: No such device
Nov 23 05:02:09 localhost nova_compute[281613]: 2025-11-23 10:02:09.294 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:09 localhost ovn_controller[153786]: 2025-11-23T10:02:09Z|00111|binding|INFO|Setting lport 6b10e464-5574-4905-9522-d6f013251048 ovn-installed in OVS
Nov 23 05:02:09 localhost ovn_controller[153786]: 2025-11-23T10:02:09Z|00112|binding|INFO|Setting lport 6b10e464-5574-4905-9522-d6f013251048 up in Southbound
Nov 23 05:02:09 localhost nova_compute[281613]: 2025-11-23 10:02:09.298 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:09 localhost journal[229736]: ethtool ioctl error on tap6b10e464-55: No such device
Nov 23 05:02:09 localhost journal[229736]: ethtool ioctl error on tap6b10e464-55: No such device
Nov 23 05:02:09 localhost journal[229736]: ethtool ioctl error on tap6b10e464-55: No such device
Nov 23 05:02:09 localhost journal[229736]: ethtool ioctl error on tap6b10e464-55: No such device
Nov 23 05:02:09 localhost journal[229736]: ethtool ioctl error on tap6b10e464-55: No such device
Nov 23 05:02:09 localhost journal[229736]: ethtool ioctl error on tap6b10e464-55: No such device
Nov 23 05:02:09 localhost nova_compute[281613]: 2025-11-23 10:02:09.333 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:09 localhost nova_compute[281613]: 2025-11-23 10:02:09.370 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:10 localhost podman[315525]: 
Nov 23 05:02:10 localhost podman[315525]: 2025-11-23 10:02:10.258420981 +0000 UTC m=+0.093586056 container create 8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:02:10 localhost systemd[1]: Started libpod-conmon-8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68.scope.
Nov 23 05:02:10 localhost systemd[1]: Started libcrun container.
Nov 23 05:02:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ebceb229e524f2edbc4919e008d37ff3fa61e249ee54adb9af11bce74365cb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:02:10 localhost podman[315525]: 2025-11-23 10:02:10.213682905 +0000 UTC m=+0.048848040 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:02:10 localhost podman[315525]: 2025-11-23 10:02:10.319873906 +0000 UTC m=+0.155038961 container init 8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:02:10 localhost podman[315525]: 2025-11-23 10:02:10.334513587 +0000 UTC m=+0.169678682 container start 8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 05:02:10 localhost dnsmasq[315543]: started, version 2.85 cachesize 150
Nov 23 05:02:10 localhost dnsmasq[315543]: DNS service limited to local subnets
Nov 23 05:02:10 localhost dnsmasq[315543]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:02:10 localhost dnsmasq[315543]: warning: no upstream servers configured
Nov 23 05:02:10 localhost dnsmasq[315543]: read /var/lib/neutron/dhcp/c1757f5c-90d0-4a1c-94be-ddf0f3276eb5/addn_hosts - 0 addresses
Nov 23 05:02:10 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:10.485 262721 INFO neutron.agent.dhcp.agent [None req-f6830b53-8f0a-4116-8727-a85b3d0b78aa - - - - - -] DHCP configuration for ports {'d039f238-4741-4079-bb4e-19223fccf536'} is completed#033[00m
Nov 23 05:02:10 localhost nova_compute[281613]: 2025-11-23 10:02:10.634 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:10 localhost dnsmasq[315543]: exiting on receipt of SIGTERM
Nov 23 05:02:10 localhost podman[315562]: 2025-11-23 10:02:10.707660489 +0000 UTC m=+0.059470242 container kill 8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 05:02:10 localhost systemd[1]: libpod-8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68.scope: Deactivated successfully.
Nov 23 05:02:10 localhost podman[315576]: 2025-11-23 10:02:10.791513368 +0000 UTC m=+0.060931631 container died 8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:02:10 localhost podman[315576]: 2025-11-23 10:02:10.888778595 +0000 UTC m=+0.158196828 container remove 8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:02:10 localhost systemd[1]: libpod-conmon-8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68.scope: Deactivated successfully.
Nov 23 05:02:10 localhost ovn_controller[153786]: 2025-11-23T10:02:10Z|00113|binding|INFO|Releasing lport 6b10e464-5574-4905-9522-d6f013251048 from this chassis (sb_readonly=0)
Nov 23 05:02:10 localhost nova_compute[281613]: 2025-11-23 10:02:10.903 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:10 localhost kernel: device tap6b10e464-55 left promiscuous mode
Nov 23 05:02:10 localhost ovn_controller[153786]: 2025-11-23T10:02:10Z|00114|binding|INFO|Setting lport 6b10e464-5574-4905-9522-d6f013251048 down in Southbound
Nov 23 05:02:10 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:10.918 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1757f5c-90d0-4a1c-94be-ddf0f3276eb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23ffb5a89d5d4d8a8900ea750309030f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39d5a421-2741-461f-8446-7c8f057988a5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=6b10e464-5574-4905-9522-d6f013251048) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:10 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:10.921 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 6b10e464-5574-4905-9522-d6f013251048 in datapath c1757f5c-90d0-4a1c-94be-ddf0f3276eb5 unbound from our chassis#033[00m
Nov 23 05:02:10 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:10.924 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1757f5c-90d0-4a1c-94be-ddf0f3276eb5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:02:10 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:10.925 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d59157-013f-4d3e-b3b2-3d0682c9968f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:10 localhost nova_compute[281613]: 2025-11-23 10:02:10.927 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:10 localhost nova_compute[281613]: 2025-11-23 10:02:10.940 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:11 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:11.171 262721 INFO neutron.agent.dhcp.agent [None req-7c8b01ba-7a9a-49e0-ba74-bdc6225eaae9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:11 localhost systemd[1]: tmp-crun.hpGPgo.mount: Deactivated successfully.
Nov 23 05:02:11 localhost systemd[1]: var-lib-containers-storage-overlay-7ebceb229e524f2edbc4919e008d37ff3fa61e249ee54adb9af11bce74365cb6-merged.mount: Deactivated successfully.
Nov 23 05:02:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f3a079c6ece8949b48aa7b3c295144318330ed758c73188a8d93eeccbfb4a68-userdata-shm.mount: Deactivated successfully.
Nov 23 05:02:11 localhost systemd[1]: run-netns-qdhcp\x2dc1757f5c\x2d90d0\x2d4a1c\x2d94be\x2dddf0f3276eb5.mount: Deactivated successfully.
Nov 23 05:02:11 localhost podman[240144]: time="2025-11-23T10:02:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:02:11 localhost podman[240144]: @ - - [23/Nov/2025:10:02:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:02:11 localhost podman[240144]: @ - - [23/Nov/2025:10:02:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19206 "" "Go-http-client/1.1"
Nov 23 05:02:11 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:11.770 2 INFO neutron.agent.securitygroups_rpc [None req-714530e0-ede7-43a6-b513-9acc4fe9127a 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:12 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:12.349 2 INFO neutron.agent.securitygroups_rpc [None req-c5ba658c-cffc-41f4-af3e-933fb394a1b2 404dc86aa2d547d1b035a15b21cd31a6 3bcc515473444ea195be635c77c65d0f - - default default] Security group member updated ['742a2d87-87e7-4137-bfc5-9bbcf8237faa']#033[00m
Nov 23 05:02:12 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:12.530 2 INFO neutron.agent.securitygroups_rpc [None req-c5ba658c-cffc-41f4-af3e-933fb394a1b2 404dc86aa2d547d1b035a15b21cd31a6 3bcc515473444ea195be635c77c65d0f - - default default] Security group member updated ['742a2d87-87e7-4137-bfc5-9bbcf8237faa']#033[00m
Nov 23 05:02:12 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:12.686 2 INFO neutron.agent.securitygroups_rpc [None req-972382fb-338a-4217-8905-e67aa90103fe 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:13 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:13.198 2 INFO neutron.agent.securitygroups_rpc [None req-e36b687d-978f-4757-a141-a3de7329fae8 404dc86aa2d547d1b035a15b21cd31a6 3bcc515473444ea195be635c77c65d0f - - default default] Security group member updated ['742a2d87-87e7-4137-bfc5-9bbcf8237faa']#033[00m
Nov 23 05:02:13 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:13.856 2 INFO neutron.agent.securitygroups_rpc [None req-072dc881-3667-4cc2-b265-d09f039c6880 404dc86aa2d547d1b035a15b21cd31a6 3bcc515473444ea195be635c77c65d0f - - default default] Security group member updated ['742a2d87-87e7-4137-bfc5-9bbcf8237faa']#033[00m
Nov 23 05:02:13 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:13.882 2 INFO neutron.agent.securitygroups_rpc [None req-a4d2f133-97c5-4bde-ae86-9705c747c91a 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:13 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:13.887 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:14 localhost nova_compute[281613]: 2025-11-23 10:02:14.313 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:14 localhost nova_compute[281613]: 2025-11-23 10:02:14.314 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:14 localhost nova_compute[281613]: 2025-11-23 10:02:14.314 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:14 localhost nova_compute[281613]: 2025-11-23 10:02:14.314 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:02:14 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:14.817 2 INFO neutron.agent.securitygroups_rpc [None req-5062901d-b55d-4422-a232-c0dc20b0538f 268d02d4288b4ff3a9dab77419bcf96a 23ffb5a89d5d4d8a8900ea750309030f - - default default] Security group member updated ['8eb14703-b106-4f91-b864-8b16a806bee3']#033[00m
Nov 23 05:02:14 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:14.835 2 INFO neutron.agent.securitygroups_rpc [None req-e9c4a6cc-b7de-4f6d-9d17-231261e88eb0 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:15 localhost nova_compute[281613]: 2025-11-23 10:02:15.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:15 localhost nova_compute[281613]: 2025-11-23 10:02:15.637 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:15 localhost nova_compute[281613]: 2025-11-23 10:02:15.940 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:16 localhost nova_compute[281613]: 2025-11-23 10:02:16.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:16 localhost nova_compute[281613]: 2025-11-23 10:02:16.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:02:16 localhost nova_compute[281613]: 2025-11-23 10:02:16.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:02:16 localhost nova_compute[281613]: 2025-11-23 10:02:16.035 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:02:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e117 e117: 6 total, 6 up, 6 in
Nov 23 05:02:18 localhost nova_compute[281613]: 2025-11-23 10:02:18.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:02:18 localhost systemd[1]: tmp-crun.TTXwJs.mount: Deactivated successfully.
Nov 23 05:02:18 localhost podman[315606]: 2025-11-23 10:02:18.192468648 +0000 UTC m=+0.095786648 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 23 05:02:18 localhost podman[315606]: 2025-11-23 10:02:18.223604242 +0000 UTC m=+0.126922242 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:02:18 localhost systemd[1]: tmp-crun.77s5YP.mount: Deactivated successfully.
Nov 23 05:02:18 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:02:18 localhost podman[315608]: 2025-11-23 10:02:18.247390724 +0000 UTC m=+0.142975131 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:02:18 localhost podman[315608]: 2025-11-23 10:02:18.283893055 +0000 UTC m=+0.179477412 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:02:18 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:02:18 localhost podman[315607]: 2025-11-23 10:02:18.30304467 +0000 UTC m=+0.204142078 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:02:18 localhost podman[315607]: 2025-11-23 10:02:18.314748951 +0000 UTC m=+0.215846379 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible)
Nov 23 05:02:18 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:02:18 localhost podman[315609]: 2025-11-23 10:02:18.407031791 +0000 UTC m=+0.297632961 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller)
Nov 23 05:02:18 localhost podman[315609]: 2025-11-23 10:02:18.455088558 +0000 UTC m=+0.345689718 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:02:18 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:02:18 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:18.758 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:02:18Z, description=, device_id=6c96453a-c777-4431-b133-5c4197796c3e, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b1f6d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b1f1f0>], id=57f4213d-c7b8-44c8-ae9f-851e6026ccc8, ip_allocation=immediate, mac_address=fa:16:3e:eb:2d:2b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1838, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:02:18Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:02:19 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:02:19 localhost podman[315704]: 2025-11-23 10:02:19.012819491 +0000 UTC m=+0.063019018 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:02:19 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:02:19 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.047 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.048 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.048 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.049 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.049 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:02:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e118 e118: 6 total, 6 up, 6 in
Nov 23 05:02:19 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:19.271 262721 INFO neutron.agent.dhcp.agent [None req-16fcfb66-16c5-4356-b99f-6a7951b70f1b - - - - - -] DHCP configuration for ports {'57f4213d-c7b8-44c8-ae9f-851e6026ccc8'} is completed#033[00m
Nov 23 05:02:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:02:19 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/840348542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.522 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.667 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.750 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.751 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11685MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.751 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.752 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.812 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.812 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:02:19 localhost nova_compute[281613]: 2025-11-23 10:02:19.837 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:02:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:02:20 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4067616315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:02:20 localhost nova_compute[281613]: 2025-11-23 10:02:20.324 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:02:20 localhost nova_compute[281613]: 2025-11-23 10:02:20.332 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:02:20 localhost nova_compute[281613]: 2025-11-23 10:02:20.367 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:02:20 localhost nova_compute[281613]: 2025-11-23 10:02:20.370 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:02:20 localhost nova_compute[281613]: 2025-11-23 10:02:20.370 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:02:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e119 e119: 6 total, 6 up, 6 in
Nov 23 05:02:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:20.420 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8:0:1:f816:3eff:feec:8f43'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0cb3a0b8-b546-4270-a81e-4e29b4aaeadf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e41a192c-bec5-4e3b-8388-8af6ab7114b5) old=Port_Binding(mac=['fa:16:3e:ec:8f:43 2001:db8::f816:3eff:feec:8f43'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:8f43/64', 'neutron:device_id': 'ovnmeta-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db04e65-3a65-4ecd-9d7c-1b518aa8c237', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a12890982534f9f8dfb103ad294ca1f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:20.423 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e41a192c-bec5-4e3b-8388-8af6ab7114b5 in datapath 6db04e65-3a65-4ecd-9d7c-1b518aa8c237 updated#033[00m
Nov 23 05:02:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:20.426 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db04e65-3a65-4ecd-9d7c-1b518aa8c237, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:02:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:20.427 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[2416f257-556b-402f-99dc-4ee04eafe63a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:20 localhost nova_compute[281613]: 2025-11-23 10:02:20.641 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:02:20 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2008208189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:02:20 localhost nova_compute[281613]: 2025-11-23 10:02:20.942 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:20 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:20.956 2 INFO neutron.agent.securitygroups_rpc [None req-da0f81f0-068a-49d7-b6f1-50fa28f5c3fa 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:21 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:21.364 262721 INFO neutron.agent.linux.ip_lib [None req-8e77b032-a092-4d5c-b090-0ca1516d2858 - - - - - -] Device tap5c4574e7-6d cannot be used as it has no MAC address#033[00m
Nov 23 05:02:21 localhost nova_compute[281613]: 2025-11-23 10:02:21.367 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:02:21 localhost nova_compute[281613]: 2025-11-23 10:02:21.391 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:21 localhost kernel: device tap5c4574e7-6d entered promiscuous mode
Nov 23 05:02:21 localhost NetworkManager[5990]: <info>  [1763892141.4012] manager: (tap5c4574e7-6d): new Generic device (/org/freedesktop/NetworkManager/Devices/27)
Nov 23 05:02:21 localhost nova_compute[281613]: 2025-11-23 10:02:21.402 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:21 localhost ovn_controller[153786]: 2025-11-23T10:02:21Z|00115|binding|INFO|Claiming lport 5c4574e7-6dd1-48ce-baac-2edfa73f9401 for this chassis.
Nov 23 05:02:21 localhost ovn_controller[153786]: 2025-11-23T10:02:21Z|00116|binding|INFO|5c4574e7-6dd1-48ce-baac-2edfa73f9401: Claiming unknown
Nov 23 05:02:21 localhost systemd-udevd[315779]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:02:21 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:21.415 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-33e8d62e-2618-4794-841b-00eb785bda87', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33e8d62e-2618-4794-841b-00eb785bda87', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bcc515473444ea195be635c77c65d0f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f9f57a7-50da-4f3e-a4ae-a8f4bbe4a1f8, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=5c4574e7-6dd1-48ce-baac-2edfa73f9401) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:21 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:21.418 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 5c4574e7-6dd1-48ce-baac-2edfa73f9401 in datapath 33e8d62e-2618-4794-841b-00eb785bda87 bound to our chassis#033[00m
Nov 23 05:02:21 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:21.420 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 33e8d62e-2618-4794-841b-00eb785bda87 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:02:21 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:21.423 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[ba171670-003e-415e-b6be-681e68c3ef63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:21 localhost journal[229736]: ethtool ioctl error on tap5c4574e7-6d: No such device
Nov 23 05:02:21 localhost ovn_controller[153786]: 2025-11-23T10:02:21Z|00117|binding|INFO|Setting lport 5c4574e7-6dd1-48ce-baac-2edfa73f9401 ovn-installed in OVS
Nov 23 05:02:21 localhost ovn_controller[153786]: 2025-11-23T10:02:21Z|00118|binding|INFO|Setting lport 5c4574e7-6dd1-48ce-baac-2edfa73f9401 up in Southbound
Nov 23 05:02:21 localhost nova_compute[281613]: 2025-11-23 10:02:21.448 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:21 localhost journal[229736]: ethtool ioctl error on tap5c4574e7-6d: No such device
Nov 23 05:02:21 localhost journal[229736]: ethtool ioctl error on tap5c4574e7-6d: No such device
Nov 23 05:02:21 localhost journal[229736]: ethtool ioctl error on tap5c4574e7-6d: No such device
Nov 23 05:02:21 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:21.467 2 INFO neutron.agent.securitygroups_rpc [None req-2c014fe0-b7cd-43b0-aa6e-452db82dfd05 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:21 localhost journal[229736]: ethtool ioctl error on tap5c4574e7-6d: No such device
Nov 23 05:02:21 localhost journal[229736]: ethtool ioctl error on tap5c4574e7-6d: No such device
Nov 23 05:02:21 localhost journal[229736]: ethtool ioctl error on tap5c4574e7-6d: No such device
Nov 23 05:02:21 localhost journal[229736]: ethtool ioctl error on tap5c4574e7-6d: No such device
Nov 23 05:02:21 localhost nova_compute[281613]: 2025-11-23 10:02:21.493 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:21 localhost nova_compute[281613]: 2025-11-23 10:02:21.527 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:21 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:21.887 2 INFO neutron.agent.securitygroups_rpc [None req-7716ba44-8c0f-4db3-b436-de264dd9940d 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:22 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:22.256 2 INFO neutron.agent.securitygroups_rpc [None req-2377c616-101f-418f-a8e9-4c617ed1b658 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:22 localhost openstack_network_exporter[242118]: ERROR   10:02:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:02:22 localhost openstack_network_exporter[242118]: ERROR   10:02:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:02:22 localhost openstack_network_exporter[242118]: ERROR   10:02:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:02:22 localhost openstack_network_exporter[242118]: ERROR   10:02:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:02:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:02:22 localhost openstack_network_exporter[242118]: ERROR   10:02:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:02:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:02:22 localhost podman[315850]: 
Nov 23 05:02:22 localhost podman[315850]: 2025-11-23 10:02:22.391151757 +0000 UTC m=+0.097960307 container create 510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33e8d62e-2618-4794-841b-00eb785bda87, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:02:22 localhost systemd[1]: Started libpod-conmon-510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217.scope.
Nov 23 05:02:22 localhost podman[315850]: 2025-11-23 10:02:22.346921784 +0000 UTC m=+0.053730364 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:02:22 localhost systemd[1]: Started libcrun container.
Nov 23 05:02:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d980acdbad94522de97db4315250f66f4902b93355d5291bbcbcd45968779b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:02:22 localhost podman[315850]: 2025-11-23 10:02:22.470348488 +0000 UTC m=+0.177157048 container init 510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33e8d62e-2618-4794-841b-00eb785bda87, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 23 05:02:22 localhost podman[315850]: 2025-11-23 10:02:22.485133184 +0000 UTC m=+0.191941734 container start 510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33e8d62e-2618-4794-841b-00eb785bda87, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 05:02:22 localhost dnsmasq[315869]: started, version 2.85 cachesize 150
Nov 23 05:02:22 localhost dnsmasq[315869]: DNS service limited to local subnets
Nov 23 05:02:22 localhost dnsmasq[315869]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:02:22 localhost dnsmasq[315869]: warning: no upstream servers configured
Nov 23 05:02:22 localhost dnsmasq-dhcp[315869]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Nov 23 05:02:22 localhost dnsmasq[315869]: read /var/lib/neutron/dhcp/33e8d62e-2618-4794-841b-00eb785bda87/addn_hosts - 0 addresses
Nov 23 05:02:22 localhost dnsmasq-dhcp[315869]: read /var/lib/neutron/dhcp/33e8d62e-2618-4794-841b-00eb785bda87/host
Nov 23 05:02:22 localhost dnsmasq-dhcp[315869]: read /var/lib/neutron/dhcp/33e8d62e-2618-4794-841b-00eb785bda87/opts
Nov 23 05:02:22 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:22.643 262721 INFO neutron.agent.dhcp.agent [None req-52d48410-c96a-4d9a-8f42-1b73889414a6 - - - - - -] DHCP configuration for ports {'45179caa-9c6d-48db-8cd2-d2217edbc9a4'} is completed#033[00m
Nov 23 05:02:22 localhost dnsmasq[315869]: exiting on receipt of SIGTERM
Nov 23 05:02:22 localhost podman[315887]: 2025-11-23 10:02:22.775518826 +0000 UTC m=+0.063555753 container kill 510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33e8d62e-2618-4794-841b-00eb785bda87, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 05:02:22 localhost systemd[1]: libpod-510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217.scope: Deactivated successfully.
Nov 23 05:02:22 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:22.843 2 INFO neutron.agent.securitygroups_rpc [None req-5aab4fba-63d5-4959-8f5e-411b7878d60b 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:22 localhost podman[315901]: 2025-11-23 10:02:22.857255138 +0000 UTC m=+0.059684529 container died 510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33e8d62e-2618-4794-841b-00eb785bda87, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:02:22 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:22.942 2 INFO neutron.agent.securitygroups_rpc [None req-7111b2ef-5dc8-48f5-9c8c-9b4c6e4004a1 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:22 localhost podman[315901]: 2025-11-23 10:02:22.955801709 +0000 UTC m=+0.158231110 container remove 510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-33e8d62e-2618-4794-841b-00eb785bda87, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:02:22 localhost systemd[1]: libpod-conmon-510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217.scope: Deactivated successfully.
Nov 23 05:02:22 localhost nova_compute[281613]: 2025-11-23 10:02:22.969 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:22 localhost ovn_controller[153786]: 2025-11-23T10:02:22Z|00119|binding|INFO|Releasing lport 5c4574e7-6dd1-48ce-baac-2edfa73f9401 from this chassis (sb_readonly=0)
Nov 23 05:02:22 localhost kernel: device tap5c4574e7-6d left promiscuous mode
Nov 23 05:02:22 localhost ovn_controller[153786]: 2025-11-23T10:02:22Z|00120|binding|INFO|Setting lport 5c4574e7-6dd1-48ce-baac-2edfa73f9401 down in Southbound
Nov 23 05:02:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:22.980 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-33e8d62e-2618-4794-841b-00eb785bda87', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33e8d62e-2618-4794-841b-00eb785bda87', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bcc515473444ea195be635c77c65d0f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f9f57a7-50da-4f3e-a4ae-a8f4bbe4a1f8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=5c4574e7-6dd1-48ce-baac-2edfa73f9401) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:22.982 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 5c4574e7-6dd1-48ce-baac-2edfa73f9401 in datapath 33e8d62e-2618-4794-841b-00eb785bda87 unbound from our chassis#033[00m
Nov 23 05:02:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:22.984 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 33e8d62e-2618-4794-841b-00eb785bda87 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:02:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:22.984 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[af4464ff-3ea3-461b-beb3-293be5f0d32b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:22 localhost nova_compute[281613]: 2025-11-23 10:02:22.990 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:23 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:23.228 262721 INFO neutron.agent.dhcp.agent [None req-652f6532-ca08-4c15-9f39-ac9a5073582c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:23 localhost systemd[1]: var-lib-containers-storage-overlay-01d980acdbad94522de97db4315250f66f4902b93355d5291bbcbcd45968779b-merged.mount: Deactivated successfully.
Nov 23 05:02:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-510f6a65178b0464741fe38a11c62db1a33f7056a5c2ce9874cc97b484d5e217-userdata-shm.mount: Deactivated successfully.
Nov 23 05:02:23 localhost systemd[1]: run-netns-qdhcp\x2d33e8d62e\x2d2618\x2d4794\x2d841b\x2d00eb785bda87.mount: Deactivated successfully.
Nov 23 05:02:23 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:23.597 2 INFO neutron.agent.securitygroups_rpc [None req-192c090e-5c5a-4cbc-acb5-d1edd0c7e4bb 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:24.325 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:24 localhost nova_compute[281613]: 2025-11-23 10:02:24.540 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:25 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e120 e120: 6 total, 6 up, 6 in
Nov 23 05:02:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:25.318 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:25 localhost nova_compute[281613]: 2025-11-23 10:02:25.601 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:25 localhost nova_compute[281613]: 2025-11-23 10:02:25.644 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:25 localhost nova_compute[281613]: 2025-11-23 10:02:25.944 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:26 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e121 e121: 6 total, 6 up, 6 in
Nov 23 05:02:26 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:26.334 2 INFO neutron.agent.securitygroups_rpc [None req-d166c481-1e29-4957-bae4-8d46f816d4e6 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:27 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e122 e122: 6 total, 6 up, 6 in
Nov 23 05:02:27 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e123 e123: 6 total, 6 up, 6 in
Nov 23 05:02:28 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:28.409 2 INFO neutron.agent.securitygroups_rpc [None req-accb9679-f798-499a-bac7-ef6b44f5ac25 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:28 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:28.910 2 INFO neutron.agent.securitygroups_rpc [None req-cba9b202-7a98-4e4d-911c-f57572c47e81 173936a8ad3f4f56a3e8a901d82c6886 2a12890982534f9f8dfb103ad294ca1f - - default default] Security group member updated ['23b6f795-fce0-46ba-a7cc-e8d055195822']#033[00m
Nov 23 05:02:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e124 e124: 6 total, 6 up, 6 in
Nov 23 05:02:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:29 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:29.882 2 INFO neutron.agent.securitygroups_rpc [None req-71451af8-2b84-4ee8-885e-297c3854d4d1 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:30 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e125 e125: 6 total, 6 up, 6 in
Nov 23 05:02:30 localhost nova_compute[281613]: 2025-11-23 10:02:30.648 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:30 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:30.782 2 INFO neutron.agent.securitygroups_rpc [None req-1ba66ce5-b42f-4fad-9876-b27f14456f6a 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:30 localhost nova_compute[281613]: 2025-11-23 10:02:30.946 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:31 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:31.998 2 INFO neutron.agent.securitygroups_rpc [None req-fe8c6fcc-6e02-46f0-a4be-4363defcaae3 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:32 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:32.105 2 INFO neutron.agent.securitygroups_rpc [None req-5d5ccae7-720c-44b0-bb02-9faf01da722a 268d02d4288b4ff3a9dab77419bcf96a 23ffb5a89d5d4d8a8900ea750309030f - - default default] Security group member updated ['8eb14703-b106-4f91-b864-8b16a806bee3']#033[00m
Nov 23 05:02:32 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e126 e126: 6 total, 6 up, 6 in
Nov 23 05:02:32 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:32.687 2 INFO neutron.agent.securitygroups_rpc [None req-07562697-3af0-478b-ab22-4ac8b6836e01 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:32 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 05:02:32 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/966109860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 05:02:33 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:33.354 2 INFO neutron.agent.securitygroups_rpc [None req-f8a04e39-0876-44f3-a37b-8d86b234eb13 268d02d4288b4ff3a9dab77419bcf96a 23ffb5a89d5d4d8a8900ea750309030f - - default default] Security group member updated ['8eb14703-b106-4f91-b864-8b16a806bee3']#033[00m
Nov 23 05:02:33 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:33.390 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e127 e127: 6 total, 6 up, 6 in
Nov 23 05:02:34 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:34.518 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e128 e128: 6 total, 6 up, 6 in
Nov 23 05:02:35 localhost nova_compute[281613]: 2025-11-23 10:02:35.652 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:35 localhost nova_compute[281613]: 2025-11-23 10:02:35.962 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e129 e129: 6 total, 6 up, 6 in
Nov 23 05:02:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:02:37 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4040804523' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:02:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:02:37 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4040804523' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:02:37 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:37.529 262721 INFO neutron.agent.linux.ip_lib [None req-cbd22bd7-34dc-4be8-9677-aea9509c992b - - - - - -] Device tapbb611f90-cf cannot be used as it has no MAC address#033[00m
Nov 23 05:02:37 localhost nova_compute[281613]: 2025-11-23 10:02:37.583 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:37 localhost kernel: device tapbb611f90-cf entered promiscuous mode
Nov 23 05:02:37 localhost NetworkManager[5990]: <info>  [1763892157.5915] manager: (tapbb611f90-cf): new Generic device (/org/freedesktop/NetworkManager/Devices/28)
Nov 23 05:02:37 localhost nova_compute[281613]: 2025-11-23 10:02:37.591 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:37 localhost ovn_controller[153786]: 2025-11-23T10:02:37Z|00121|binding|INFO|Claiming lport bb611f90-cf15-45bf-a4cc-72a9ddb25347 for this chassis.
Nov 23 05:02:37 localhost ovn_controller[153786]: 2025-11-23T10:02:37Z|00122|binding|INFO|bb611f90-cf15-45bf-a4cc-72a9ddb25347: Claiming unknown
Nov 23 05:02:37 localhost systemd-udevd[315940]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:02:37 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:37.607 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-557770b4-85f4-49d5-ab42-fb74604fac60', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-557770b4-85f4-49d5-ab42-fb74604fac60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46f04eb9-3fb3-488a-9149-75a9a8422214, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=bb611f90-cf15-45bf-a4cc-72a9ddb25347) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:37 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:37.610 159429 INFO neutron.agent.ovn.metadata.agent [-] Port bb611f90-cf15-45bf-a4cc-72a9ddb25347 in datapath 557770b4-85f4-49d5-ab42-fb74604fac60 bound to our chassis#033[00m
Nov 23 05:02:37 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:37.611 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 557770b4-85f4-49d5-ab42-fb74604fac60 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:02:37 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:37.612 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc1c56b-9a36-4558-bf11-2645bfc029de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:37 localhost journal[229736]: ethtool ioctl error on tapbb611f90-cf: No such device
Nov 23 05:02:37 localhost journal[229736]: ethtool ioctl error on tapbb611f90-cf: No such device
Nov 23 05:02:37 localhost ovn_controller[153786]: 2025-11-23T10:02:37Z|00123|binding|INFO|Setting lport bb611f90-cf15-45bf-a4cc-72a9ddb25347 ovn-installed in OVS
Nov 23 05:02:37 localhost nova_compute[281613]: 2025-11-23 10:02:37.637 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:37 localhost ovn_controller[153786]: 2025-11-23T10:02:37Z|00124|binding|INFO|Setting lport bb611f90-cf15-45bf-a4cc-72a9ddb25347 up in Southbound
Nov 23 05:02:37 localhost nova_compute[281613]: 2025-11-23 10:02:37.640 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:37 localhost journal[229736]: ethtool ioctl error on tapbb611f90-cf: No such device
Nov 23 05:02:37 localhost journal[229736]: ethtool ioctl error on tapbb611f90-cf: No such device
Nov 23 05:02:37 localhost journal[229736]: ethtool ioctl error on tapbb611f90-cf: No such device
Nov 23 05:02:37 localhost journal[229736]: ethtool ioctl error on tapbb611f90-cf: No such device
Nov 23 05:02:37 localhost journal[229736]: ethtool ioctl error on tapbb611f90-cf: No such device
Nov 23 05:02:37 localhost journal[229736]: ethtool ioctl error on tapbb611f90-cf: No such device
Nov 23 05:02:37 localhost nova_compute[281613]: 2025-11-23 10:02:37.676 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:37 localhost nova_compute[281613]: 2025-11-23 10:02:37.706 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e130 e130: 6 total, 6 up, 6 in
Nov 23 05:02:38 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:38.082 2 INFO neutron.agent.securitygroups_rpc [None req-5e89ea53-7b2c-469d-be9d-51deeecc82b0 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:02:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:02:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:02:38 localhost ovn_controller[153786]: 2025-11-23T10:02:38Z|00125|binding|INFO|Removing iface tapbb611f90-cf ovn-installed in OVS
Nov 23 05:02:38 localhost ovn_controller[153786]: 2025-11-23T10:02:38Z|00126|binding|INFO|Removing lport bb611f90-cf15-45bf-a4cc-72a9ddb25347 ovn-installed in OVS
Nov 23 05:02:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:38.430 159429 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d2ff87a3-fae3-42df-b878-bfd50535a0f0 with type ""#033[00m
Nov 23 05:02:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:38.432 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-557770b4-85f4-49d5-ab42-fb74604fac60', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-557770b4-85f4-49d5-ab42-fb74604fac60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46f04eb9-3fb3-488a-9149-75a9a8422214, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=bb611f90-cf15-45bf-a4cc-72a9ddb25347) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:38.434 159429 INFO neutron.agent.ovn.metadata.agent [-] Port bb611f90-cf15-45bf-a4cc-72a9ddb25347 in datapath 557770b4-85f4-49d5-ab42-fb74604fac60 unbound from our chassis#033[00m
Nov 23 05:02:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:38.435 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 557770b4-85f4-49d5-ab42-fb74604fac60 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:02:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:38.435 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[60b636b2-8494-4769-a688-496524f5320f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:38 localhost nova_compute[281613]: 2025-11-23 10:02:38.434 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:38 localhost nova_compute[281613]: 2025-11-23 10:02:38.442 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:38 localhost podman[316007]: 2025-11-23 10:02:38.496599328 +0000 UTC m=+0.118851679 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 05:02:38 localhost podman[316007]: 2025-11-23 10:02:38.579970004 +0000 UTC m=+0.202222415 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 05:02:38 localhost podman[316008]: 2025-11-23 10:02:38.591898661 +0000 UTC m=+0.205060863 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:02:38 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:02:38 localhost podman[316006]: 2025-11-23 10:02:38.63966484 +0000 UTC m=+0.264320597 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Nov 23 05:02:38 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:38.652 2 INFO neutron.agent.securitygroups_rpc [None req-64ec2b21-8187-4ac4-a9ab-6a7a717b7a77 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:38 localhost podman[316008]: 2025-11-23 10:02:38.666588819 +0000 UTC m=+0.279751011 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:02:38 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:02:38 localhost podman[316006]: 2025-11-23 10:02:38.680670655 +0000 UTC m=+0.305326442 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 23 05:02:38 localhost podman[316087]: 
Nov 23 05:02:38 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:02:38 localhost podman[316087]: 2025-11-23 10:02:38.706418721 +0000 UTC m=+0.157832888 container create 2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-557770b4-85f4-49d5-ab42-fb74604fac60, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 23 05:02:38 localhost systemd[1]: Started libpod-conmon-2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488.scope.
Nov 23 05:02:38 localhost podman[316087]: 2025-11-23 10:02:38.649733987 +0000 UTC m=+0.101148154 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:02:38 localhost systemd[1]: Started libcrun container.
Nov 23 05:02:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13b684d8b85f7d988dad4e24bcf606eee49c41ce94a397c440b81a4c773d79f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:02:38 localhost podman[316087]: 2025-11-23 10:02:38.782727603 +0000 UTC m=+0.234141760 container init 2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-557770b4-85f4-49d5-ab42-fb74604fac60, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 05:02:38 localhost podman[316087]: 2025-11-23 10:02:38.792288856 +0000 UTC m=+0.243703013 container start 2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-557770b4-85f4-49d5-ab42-fb74604fac60, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:02:38 localhost dnsmasq[316126]: started, version 2.85 cachesize 150
Nov 23 05:02:38 localhost dnsmasq[316126]: DNS service limited to local subnets
Nov 23 05:02:38 localhost dnsmasq[316126]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:02:38 localhost dnsmasq[316126]: warning: no upstream servers configured
Nov 23 05:02:38 localhost dnsmasq-dhcp[316126]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Nov 23 05:02:38 localhost dnsmasq[316126]: read /var/lib/neutron/dhcp/557770b4-85f4-49d5-ab42-fb74604fac60/addn_hosts - 0 addresses
Nov 23 05:02:38 localhost dnsmasq-dhcp[316126]: read /var/lib/neutron/dhcp/557770b4-85f4-49d5-ab42-fb74604fac60/host
Nov 23 05:02:38 localhost dnsmasq-dhcp[316126]: read /var/lib/neutron/dhcp/557770b4-85f4-49d5-ab42-fb74604fac60/opts
Nov 23 05:02:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e131 e131: 6 total, 6 up, 6 in
Nov 23 05:02:39 localhost nova_compute[281613]: 2025-11-23 10:02:39.180 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:39 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:39.310 262721 INFO neutron.agent.dhcp.agent [None req-41f3781c-9010-43c9-a20e-5e30a4341b9c - - - - - -] DHCP configuration for ports {'016ee94d-536a-4ce4-9194-d0d9b22ea172'} is completed#033[00m
Nov 23 05:02:39 localhost dnsmasq[316126]: read /var/lib/neutron/dhcp/557770b4-85f4-49d5-ab42-fb74604fac60/addn_hosts - 0 addresses
Nov 23 05:02:39 localhost dnsmasq-dhcp[316126]: read /var/lib/neutron/dhcp/557770b4-85f4-49d5-ab42-fb74604fac60/host
Nov 23 05:02:39 localhost dnsmasq-dhcp[316126]: read /var/lib/neutron/dhcp/557770b4-85f4-49d5-ab42-fb74604fac60/opts
Nov 23 05:02:39 localhost podman[316175]: 2025-11-23 10:02:39.506231451 +0000 UTC m=+0.059253456 container kill 2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-557770b4-85f4-49d5-ab42-fb74604fac60, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 05:02:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:39 localhost dnsmasq[316126]: exiting on receipt of SIGTERM
Nov 23 05:02:39 localhost podman[316231]: 2025-11-23 10:02:39.896343627 +0000 UTC m=+0.078968116 container kill 2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-557770b4-85f4-49d5-ab42-fb74604fac60, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 05:02:39 localhost systemd[1]: tmp-crun.VwDvwF.mount: Deactivated successfully.
Nov 23 05:02:39 localhost systemd[1]: libpod-2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488.scope: Deactivated successfully.
Nov 23 05:02:39 localhost podman[316246]: 2025-11-23 10:02:39.992563444 +0000 UTC m=+0.070588896 container died 2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-557770b4-85f4-49d5-ab42-fb74604fac60, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 05:02:40 localhost podman[316246]: 2025-11-23 10:02:40.110416986 +0000 UTC m=+0.188442408 container remove 2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-557770b4-85f4-49d5-ab42-fb74604fac60, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:02:40 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:02:40 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:02:40 localhost systemd[1]: libpod-conmon-2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488.scope: Deactivated successfully.
Nov 23 05:02:40 localhost kernel: device tapbb611f90-cf left promiscuous mode
Nov 23 05:02:40 localhost nova_compute[281613]: 2025-11-23 10:02:40.132 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:40 localhost nova_compute[281613]: 2025-11-23 10:02:40.145 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:40 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:40.167 262721 INFO neutron.agent.dhcp.agent [None req-5267d3b8-bffd-478b-8b40-94f01cd24f6b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:40 localhost systemd[1]: tmp-crun.JrRLih.mount: Deactivated successfully.
Nov 23 05:02:40 localhost systemd[1]: var-lib-containers-storage-overlay-b13b684d8b85f7d988dad4e24bcf606eee49c41ce94a397c440b81a4c773d79f-merged.mount: Deactivated successfully.
Nov 23 05:02:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c75a725af7b0483b7066809f303ee7105151896f174d8fe36e4c6b068587488-userdata-shm.mount: Deactivated successfully.
Nov 23 05:02:40 localhost systemd[1]: run-netns-qdhcp\x2d557770b4\x2d85f4\x2d49d5\x2dab42\x2dfb74604fac60.mount: Deactivated successfully.
Nov 23 05:02:40 localhost nova_compute[281613]: 2025-11-23 10:02:40.652 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:40 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:40.881 2 INFO neutron.agent.securitygroups_rpc [None req-abfc9448-e887-481e-8dd9-d14e7060c10b 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:40 localhost nova_compute[281613]: 2025-11-23 10:02:40.963 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:41 localhost podman[240144]: time="2025-11-23T10:02:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:02:41 localhost podman[240144]: @ - - [23/Nov/2025:10:02:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:02:41 localhost podman[240144]: @ - - [23/Nov/2025:10:02:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19211 "" "Go-http-client/1.1"
Nov 23 05:02:41 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e132 e132: 6 total, 6 up, 6 in
Nov 23 05:02:41 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:41.814 2 INFO neutron.agent.securitygroups_rpc [None req-1e146831-4c05-4284-9c5f-ec90349e3dfc 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:42 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:42.926 2 INFO neutron.agent.securitygroups_rpc [None req-1cdbc8e1-ca0b-46de-ad57-ff34c29d4e3c fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['7d97459a-8496-4621-8cc6-1521c3f526b4']#033[00m
Nov 23 05:02:42 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e133 e133: 6 total, 6 up, 6 in
Nov 23 05:02:43 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:43.334 2 INFO neutron.agent.securitygroups_rpc [None req-9fd2c3cf-c790-41eb-bd9f-c36dc00051cd fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['7d97459a-8496-4621-8cc6-1521c3f526b4']#033[00m
Nov 23 05:02:43 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:02:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:44 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:44.909 2 INFO neutron.agent.securitygroups_rpc [None req-c88be53d-2b25-4bbf-93ce-69a72dd56cf0 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:44 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:44.961 262721 INFO neutron.agent.linux.ip_lib [None req-355149d3-ea87-4623-9d08-e525596ca4ee - - - - - -] Device tap537283dc-3b cannot be used as it has no MAC address#033[00m
Nov 23 05:02:44 localhost nova_compute[281613]: 2025-11-23 10:02:44.992 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:45 localhost kernel: device tap537283dc-3b entered promiscuous mode
Nov 23 05:02:45 localhost nova_compute[281613]: 2025-11-23 10:02:45.003 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:45 localhost ovn_controller[153786]: 2025-11-23T10:02:45Z|00127|binding|INFO|Claiming lport 537283dc-3bf2-449b-bdff-690ff3ce6572 for this chassis.
Nov 23 05:02:45 localhost ovn_controller[153786]: 2025-11-23T10:02:45Z|00128|binding|INFO|537283dc-3bf2-449b-bdff-690ff3ce6572: Claiming unknown
Nov 23 05:02:45 localhost NetworkManager[5990]: <info>  [1763892165.0059] manager: (tap537283dc-3b): new Generic device (/org/freedesktop/NetworkManager/Devices/29)
Nov 23 05:02:45 localhost systemd-udevd[316284]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:02:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:45.024 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.243/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-e5266388-eb1d-4ef2-ac78-bf3856ad5b84', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5266388-eb1d-4ef2-ac78-bf3856ad5b84', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72717ea9-3adf-4245-b2b7-b38253c2d684, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=537283dc-3bf2-449b-bdff-690ff3ce6572) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:45.028 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 537283dc-3bf2-449b-bdff-690ff3ce6572 in datapath e5266388-eb1d-4ef2-ac78-bf3856ad5b84 bound to our chassis#033[00m
Nov 23 05:02:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:45.031 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 58f770b9-f511-4d3f-8311-e6098f715c0d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 05:02:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:45.031 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5266388-eb1d-4ef2-ac78-bf3856ad5b84, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:02:45 localhost journal[229736]: ethtool ioctl error on tap537283dc-3b: No such device
Nov 23 05:02:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:45.033 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e88b8241-b717-495f-a72e-35daaa67e11c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:45 localhost nova_compute[281613]: 2025-11-23 10:02:45.039 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:45 localhost ovn_controller[153786]: 2025-11-23T10:02:45Z|00129|binding|INFO|Setting lport 537283dc-3bf2-449b-bdff-690ff3ce6572 ovn-installed in OVS
Nov 23 05:02:45 localhost ovn_controller[153786]: 2025-11-23T10:02:45Z|00130|binding|INFO|Setting lport 537283dc-3bf2-449b-bdff-690ff3ce6572 up in Southbound
Nov 23 05:02:45 localhost nova_compute[281613]: 2025-11-23 10:02:45.044 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:45 localhost journal[229736]: ethtool ioctl error on tap537283dc-3b: No such device
Nov 23 05:02:45 localhost nova_compute[281613]: 2025-11-23 10:02:45.045 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:45 localhost journal[229736]: ethtool ioctl error on tap537283dc-3b: No such device
Nov 23 05:02:45 localhost journal[229736]: ethtool ioctl error on tap537283dc-3b: No such device
Nov 23 05:02:45 localhost journal[229736]: ethtool ioctl error on tap537283dc-3b: No such device
Nov 23 05:02:45 localhost journal[229736]: ethtool ioctl error on tap537283dc-3b: No such device
Nov 23 05:02:45 localhost journal[229736]: ethtool ioctl error on tap537283dc-3b: No such device
Nov 23 05:02:45 localhost journal[229736]: ethtool ioctl error on tap537283dc-3b: No such device
Nov 23 05:02:45 localhost nova_compute[281613]: 2025-11-23 10:02:45.086 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:45 localhost nova_compute[281613]: 2025-11-23 10:02:45.123 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:45 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:45.219 2 INFO neutron.agent.securitygroups_rpc [None req-3637ef8e-b231-4ebc-b0ad-62e8972d9361 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:45 localhost nova_compute[281613]: 2025-11-23 10:02:45.377 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:45.381 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:45.383 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:02:45 localhost nova_compute[281613]: 2025-11-23 10:02:45.656 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:46 localhost nova_compute[281613]: 2025-11-23 10:02:46.003 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:46 localhost podman[316355]: 
Nov 23 05:02:46 localhost podman[316355]: 2025-11-23 10:02:46.22149282 +0000 UTC m=+0.104787044 container create 3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5266388-eb1d-4ef2-ac78-bf3856ad5b84, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:02:46 localhost podman[316355]: 2025-11-23 10:02:46.167328514 +0000 UTC m=+0.050622768 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:02:46 localhost systemd[1]: Started libpod-conmon-3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338.scope.
Nov 23 05:02:46 localhost systemd[1]: Started libcrun container.
Nov 23 05:02:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3d80d386c7859c4ac628dc8bcd70ab89e5c49226e1c2405aa920402c8e93f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:02:46 localhost podman[316355]: 2025-11-23 10:02:46.328650838 +0000 UTC m=+0.211945052 container init 3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5266388-eb1d-4ef2-ac78-bf3856ad5b84, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:02:46 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:46.337 2 INFO neutron.agent.securitygroups_rpc [None req-c48e4a17-f990-4551-a604-7a1f5baad78f fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:46 localhost podman[316355]: 2025-11-23 10:02:46.344076781 +0000 UTC m=+0.227370995 container start 3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5266388-eb1d-4ef2-ac78-bf3856ad5b84, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 05:02:46 localhost dnsmasq[316373]: started, version 2.85 cachesize 150
Nov 23 05:02:46 localhost dnsmasq[316373]: DNS service limited to local subnets
Nov 23 05:02:46 localhost dnsmasq[316373]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:02:46 localhost dnsmasq[316373]: warning: no upstream servers configured
Nov 23 05:02:46 localhost dnsmasq-dhcp[316373]: DHCP, static leases only on 10.100.255.240, lease time 1d
Nov 23 05:02:46 localhost dnsmasq[316373]: read /var/lib/neutron/dhcp/e5266388-eb1d-4ef2-ac78-bf3856ad5b84/addn_hosts - 0 addresses
Nov 23 05:02:46 localhost dnsmasq-dhcp[316373]: read /var/lib/neutron/dhcp/e5266388-eb1d-4ef2-ac78-bf3856ad5b84/host
Nov 23 05:02:46 localhost dnsmasq-dhcp[316373]: read /var/lib/neutron/dhcp/e5266388-eb1d-4ef2-ac78-bf3856ad5b84/opts
Nov 23 05:02:46 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:46.385 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:02:46 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:46.609 262721 INFO neutron.agent.dhcp.agent [None req-b0366e0c-c5d1-4aa6-9b2a-a33aa4406b6a - - - - - -] DHCP configuration for ports {'01a84023-261a-49d1-96a2-23364898e9f2'} is completed#033[00m
Nov 23 05:02:46 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:46.888 2 INFO neutron.agent.securitygroups_rpc [None req-69aab5f7-414f-4919-8a4c-b4fcd82ff545 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:46 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:46.943 2 INFO neutron.agent.securitygroups_rpc [None req-19349ae1-ccba-4615-bdb3-c092a3aa234c 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:47 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:47.196 2 INFO neutron.agent.securitygroups_rpc [None req-a4966cd3-a78e-4723-9b2c-0105f97058c4 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:47 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:47.542 2 INFO neutron.agent.securitygroups_rpc [None req-e2113fc4-8297-4e83-8bfc-91b8d2f12d0b fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:47 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e134 e134: 6 total, 6 up, 6 in
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:47.974257) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892167974313, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1211, "num_deletes": 259, "total_data_size": 1566417, "memory_usage": 1591856, "flush_reason": "Manual Compaction"}
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892167982573, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 813419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21636, "largest_seqno": 22842, "table_properties": {"data_size": 808882, "index_size": 2072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11666, "raw_average_key_size": 21, "raw_value_size": 799296, "raw_average_value_size": 1482, "num_data_blocks": 90, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892107, "oldest_key_time": 1763892107, "file_creation_time": 1763892167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 8397 microseconds, and 4596 cpu microseconds.
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:47.982645) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 813419 bytes OK
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:47.982683) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:47.993202) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:47.993236) EVENT_LOG_v1 {"time_micros": 1763892167993228, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:47.993269) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1560441, prev total WAL file size 1560441, number of live WAL files 2.
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:47.994279) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303037' seq:72057594037927935, type:22 .. '6D6772737461740034323539' seq:0, type:0; will stop at (end)
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(794KB)], [36(15MB)]
Nov 23 05:02:47 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892167994317, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 16844125, "oldest_snapshot_seqno": -1}
Nov 23 05:02:47 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:47.996 2 INFO neutron.agent.securitygroups_rpc [None req-11137031-a253-41b4-b5b7-0c84953c1da3 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12453 keys, 14927929 bytes, temperature: kUnknown
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892168046000, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 14927929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14860394, "index_size": 35306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31173, "raw_key_size": 337121, "raw_average_key_size": 27, "raw_value_size": 14651392, "raw_average_value_size": 1176, "num_data_blocks": 1309, "num_entries": 12453, "num_filter_entries": 12453, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:48.046191) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 14927929 bytes
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:48.047582) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 325.6 rd, 288.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.3 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(39.1) write-amplify(18.4) OK, records in: 12960, records dropped: 507 output_compression: NoCompression
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:48.047597) EVENT_LOG_v1 {"time_micros": 1763892168047590, "job": 20, "event": "compaction_finished", "compaction_time_micros": 51739, "compaction_time_cpu_micros": 22520, "output_level": 6, "num_output_files": 1, "total_output_size": 14927929, "num_input_records": 12960, "num_output_records": 12453, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892168047739, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892168048833, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:47.994172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:48.048871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:48.048877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:48.048881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:48.048883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:02:48 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:02:48.048886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:02:48 localhost sshd[316374]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:02:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:02:48 localhost podman[316383]: 2025-11-23 10:02:48.626687235 +0000 UTC m=+0.072558930 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:02:48 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:48.631 2 INFO neutron.agent.securitygroups_rpc [None req-edba85f2-3d5d-4fd2-9504-24d71c7cf655 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:48 localhost podman[316380]: 2025-11-23 10:02:48.659238158 +0000 UTC m=+0.108358662 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:02:48 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:48.663 262721 INFO neutron.agent.linux.ip_lib [None req-a11f592b-c9c9-49d6-b299-e6c7bd25e464 - - - - - -] Device tapa3a72db0-c1 cannot be used as it has no MAC address#033[00m
Nov 23 05:02:48 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:48.680 262721 INFO neutron.agent.linux.ip_lib [None req-79429415-3e24-4dbe-8713-ccdb52353654 - - - - - -] Device tap9a94c9b9-4f cannot be used as it has no MAC address#033[00m
Nov 23 05:02:48 localhost podman[316382]: 2025-11-23 10:02:48.686720852 +0000 UTC m=+0.135365833 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.696 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost kernel: device tapa3a72db0-c1 entered promiscuous mode
Nov 23 05:02:48 localhost NetworkManager[5990]: <info>  [1763892168.7045] manager: (tapa3a72db0-c1): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00131|binding|INFO|Claiming lport a3a72db0-c10b-4285-8744-ba10ac2fc343 for this chassis.
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00132|binding|INFO|a3a72db0-c10b-4285-8744-ba10ac2fc343: Claiming unknown
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.708 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost kernel: device tap9a94c9b9-4f entered promiscuous mode
Nov 23 05:02:48 localhost NetworkManager[5990]: <info>  [1763892168.7100] manager: (tap9a94c9b9-4f): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Nov 23 05:02:48 localhost podman[316383]: 2025-11-23 10:02:48.710090222 +0000 UTC m=+0.155961927 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 23 05:02:48 localhost systemd-udevd[316463]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:02:48 localhost systemd-udevd[316464]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:02:48 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.722 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00133|if_status|INFO|Dropped 2 log messages in last 1429 seconds (most recently, 1429 seconds ago) due to excessive rate
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00134|if_status|INFO|Not updating pb chassis for 9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a now as sb is readonly
Nov 23 05:02:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:48.724 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40e3f4f1-42e1-4e9d-b712-f215a04b83ca, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=a3a72db0-c10b-4285-8744-ba10ac2fc343) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00135|binding|INFO|Claiming lport 9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a for this chassis.
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00136|binding|INFO|9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a: Claiming unknown
Nov 23 05:02:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:48.725 159429 INFO neutron.agent.ovn.metadata.agent [-] Port a3a72db0-c10b-4285-8744-ba10ac2fc343 in datapath c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b bound to our chassis#033[00m
Nov 23 05:02:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:48.726 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:02:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:48.726 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f791cd0b-fbbe-419e-8fab-80addde24085]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.732 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.735 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost journal[229736]: ethtool ioctl error on tapa3a72db0-c1: No such device
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00137|binding|INFO|Setting lport a3a72db0-c10b-4285-8744-ba10ac2fc343 ovn-installed in OVS
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.740 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00138|binding|INFO|Setting lport 9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a ovn-installed in OVS
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.748 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost journal[229736]: ethtool ioctl error on tapa3a72db0-c1: No such device
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.766 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost journal[229736]: ethtool ioctl error on tapa3a72db0-c1: No such device
Nov 23 05:02:48 localhost podman[316382]: 2025-11-23 10:02:48.773279705 +0000 UTC m=+0.221924756 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00139|binding|INFO|Setting lport 9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a up in Southbound
Nov 23 05:02:48 localhost ovn_controller[153786]: 2025-11-23T10:02:48Z|00140|binding|INFO|Setting lport a3a72db0-c10b-4285-8744-ba10ac2fc343 up in Southbound
Nov 23 05:02:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:48.774 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-945ad566-67bf-4249-891a-85145efe6d8a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-945ad566-67bf-4249-891a-85145efe6d8a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bbbcca6d-5c4e-4311-a12b-28640754e98f, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:48.775 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a in datapath 945ad566-67bf-4249-891a-85145efe6d8a bound to our chassis#033[00m
Nov 23 05:02:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:48.776 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 945ad566-67bf-4249-891a-85145efe6d8a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:02:48 localhost journal[229736]: ethtool ioctl error on tapa3a72db0-c1: No such device
Nov 23 05:02:48 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:48.777 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[2c559231-7614-4456-9212-fc582fd34f5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:48 localhost journal[229736]: ethtool ioctl error on tapa3a72db0-c1: No such device
Nov 23 05:02:48 localhost journal[229736]: ethtool ioctl error on tapa3a72db0-c1: No such device
Nov 23 05:02:48 localhost journal[229736]: ethtool ioctl error on tapa3a72db0-c1: No such device
Nov 23 05:02:48 localhost journal[229736]: ethtool ioctl error on tapa3a72db0-c1: No such device
Nov 23 05:02:48 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:02:48 localhost podman[316380]: 2025-11-23 10:02:48.804641685 +0000 UTC m=+0.253762149 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.813 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:02:48 localhost podman[316381]: 2025-11-23 10:02:48.78259163 +0000 UTC m=+0.231301933 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd)
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.848 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost nova_compute[281613]: 2025-11-23 10:02:48.851 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:48 localhost podman[316381]: 2025-11-23 10:02:48.867950801 +0000 UTC m=+0.316661094 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:02:48 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:02:49 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:49.201 2 INFO neutron.agent.securitygroups_rpc [None req-d908f8b2-2302-4d45-805c-4d35b6de9b08 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:49 localhost systemd[1]: tmp-crun.wvFOxx.mount: Deactivated successfully.
Nov 23 05:02:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:49 localhost podman[316585]: 
Nov 23 05:02:49 localhost podman[316585]: 2025-11-23 10:02:49.900896002 +0000 UTC m=+0.110619754 container create 6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-945ad566-67bf-4249-891a-85145efe6d8a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:02:49 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:49.946 2 INFO neutron.agent.securitygroups_rpc [None req-765ea533-9294-4729-95dc-8b0b8371a481 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']#033[00m
Nov 23 05:02:49 localhost systemd[1]: Started libpod-conmon-6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54.scope.
Nov 23 05:02:49 localhost podman[316585]: 2025-11-23 10:02:49.851456277 +0000 UTC m=+0.061180089 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:02:49 localhost podman[316605]: 
Nov 23 05:02:49 localhost podman[316605]: 2025-11-23 10:02:49.978652524 +0000 UTC m=+0.103053507 container create 685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:02:49 localhost systemd[1]: Started libcrun container.
Nov 23 05:02:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9483312d48d07b72cad93774495c9c97ef9fb2542da80084b4b126ec4737cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:02:49 localhost podman[316585]: 2025-11-23 10:02:49.998962551 +0000 UTC m=+0.208686313 container init 6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-945ad566-67bf-4249-891a-85145efe6d8a, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:02:50 localhost podman[316585]: 2025-11-23 10:02:50.011172696 +0000 UTC m=+0.220896428 container start 6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-945ad566-67bf-4249-891a-85145efe6d8a, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:02:50 localhost dnsmasq[316626]: started, version 2.85 cachesize 150
Nov 23 05:02:50 localhost dnsmasq[316626]: DNS service limited to local subnets
Nov 23 05:02:50 localhost dnsmasq[316626]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:02:50 localhost dnsmasq[316626]: warning: no upstream servers configured
Nov 23 05:02:50 localhost dnsmasq-dhcp[316626]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Nov 23 05:02:50 localhost systemd[1]: Started libpod-conmon-685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de.scope.
Nov 23 05:02:50 localhost dnsmasq[316626]: read /var/lib/neutron/dhcp/945ad566-67bf-4249-891a-85145efe6d8a/addn_hosts - 0 addresses
Nov 23 05:02:50 localhost dnsmasq-dhcp[316626]: read /var/lib/neutron/dhcp/945ad566-67bf-4249-891a-85145efe6d8a/host
Nov 23 05:02:50 localhost dnsmasq-dhcp[316626]: read /var/lib/neutron/dhcp/945ad566-67bf-4249-891a-85145efe6d8a/opts
Nov 23 05:02:50 localhost podman[316605]: 2025-11-23 10:02:49.933143236 +0000 UTC m=+0.057544250 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:02:50 localhost systemd[1]: Started libcrun container.
Nov 23 05:02:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc114d603f624142d3d58feed4c57030f8d5d776f129f3dfad1d2626ee0ee291/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:02:50 localhost podman[316605]: 2025-11-23 10:02:50.060837637 +0000 UTC m=+0.185238610 container init 685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 05:02:50 localhost podman[316605]: 2025-11-23 10:02:50.070979026 +0000 UTC m=+0.195380009 container start 685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:02:50 localhost dnsmasq[316631]: started, version 2.85 cachesize 150
Nov 23 05:02:50 localhost dnsmasq[316631]: DNS service limited to local subnets
Nov 23 05:02:50 localhost dnsmasq[316631]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:02:50 localhost dnsmasq[316631]: warning: no upstream servers configured
Nov 23 05:02:50 localhost dnsmasq-dhcp[316631]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:02:50 localhost dnsmasq[316631]: read /var/lib/neutron/dhcp/c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b/addn_hosts - 0 addresses
Nov 23 05:02:50 localhost dnsmasq-dhcp[316631]: read /var/lib/neutron/dhcp/c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b/host
Nov 23 05:02:50 localhost dnsmasq-dhcp[316631]: read /var/lib/neutron/dhcp/c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b/opts
Nov 23 05:02:50 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:50.124 2 INFO neutron.agent.securitygroups_rpc [None req-838089d2-b106-4cd8-a2d9-7d5c71329675 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['ff801f98-3c32-488d-a0ec-adbb05a31d18']#033[00m
Nov 23 05:02:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:50.219 262721 INFO neutron.agent.dhcp.agent [None req-a8fac01d-6a0d-412c-8a6a-23fdb0be8819 - - - - - -] DHCP configuration for ports {'6b200fbc-025f-45bb-b04b-f1379342d47f', 'c9f1a57c-09e2-46dc-9890-f35dff0f8295'} is completed#033[00m
Nov 23 05:02:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:50.279 159429 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port bb0609ce-24ff-41e0-8d66-55ee940f4ef5 with type ""#033[00m
Nov 23 05:02:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:50.280 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-945ad566-67bf-4249-891a-85145efe6d8a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-945ad566-67bf-4249-891a-85145efe6d8a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bbbcca6d-5c4e-4311-a12b-28640754e98f, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:50 localhost ovn_controller[153786]: 2025-11-23T10:02:50Z|00141|binding|INFO|Removing iface tap9a94c9b9-4f ovn-installed in OVS
Nov 23 05:02:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:50.281 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a in datapath 945ad566-67bf-4249-891a-85145efe6d8a unbound from our chassis#033[00m
Nov 23 05:02:50 localhost ovn_controller[153786]: 2025-11-23T10:02:50Z|00142|binding|INFO|Removing lport 9a94c9b9-4ff3-4517-aa8e-9cfdf12b805a ovn-installed in OVS
Nov 23 05:02:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:50.281 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 945ad566-67bf-4249-891a-85145efe6d8a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:02:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:50.282 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[852fde54-efed-4400-96b0-be9c20ad805d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:50 localhost nova_compute[281613]: 2025-11-23 10:02:50.283 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:50 localhost nova_compute[281613]: 2025-11-23 10:02:50.286 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:50 localhost dnsmasq[316626]: exiting on receipt of SIGTERM
Nov 23 05:02:50 localhost podman[316650]: 2025-11-23 10:02:50.341643227 +0000 UTC m=+0.061923550 container kill 6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-945ad566-67bf-4249-891a-85145efe6d8a, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 05:02:50 localhost systemd[1]: libpod-6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54.scope: Deactivated successfully.
Nov 23 05:02:50 localhost podman[316665]: 2025-11-23 10:02:50.426648957 +0000 UTC m=+0.060721636 container died 6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-945ad566-67bf-4249-891a-85145efe6d8a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:02:50 localhost podman[316665]: 2025-11-23 10:02:50.476868644 +0000 UTC m=+0.110941293 container remove 6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-945ad566-67bf-4249-891a-85145efe6d8a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 23 05:02:50 localhost systemd[1]: libpod-conmon-6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54.scope: Deactivated successfully.
Nov 23 05:02:50 localhost nova_compute[281613]: 2025-11-23 10:02:50.494 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:50 localhost kernel: device tap9a94c9b9-4f left promiscuous mode
Nov 23 05:02:50 localhost systemd[1]: var-lib-containers-storage-overlay-fd9483312d48d07b72cad93774495c9c97ef9fb2542da80084b4b126ec4737cb-merged.mount: Deactivated successfully.
Nov 23 05:02:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6060e0f973ee9f3dfe21267de31bc171a7b3f0b207a680efa2e2aeec7ad68a54-userdata-shm.mount: Deactivated successfully.
Nov 23 05:02:50 localhost nova_compute[281613]: 2025-11-23 10:02:50.519 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:50.538 262721 INFO neutron.agent.dhcp.agent [None req-1b1fcf26-724a-45e0-b931-fa1750140963 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:50.539 262721 INFO neutron.agent.dhcp.agent [None req-1b1fcf26-724a-45e0-b931-fa1750140963 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:02:50 localhost systemd[1]: run-netns-qdhcp\x2d945ad566\x2d67bf\x2d4249\x2d891a\x2d85145efe6d8a.mount: Deactivated successfully.
Nov 23 05:02:50 localhost nova_compute[281613]: 2025-11-23 10:02:50.630 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:50 localhost nova_compute[281613]: 2025-11-23 10:02:50.658 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:51 localhost nova_compute[281613]: 2025-11-23 10:02:51.042 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:51 localhost nova_compute[281613]: 2025-11-23 10:02:51.178 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:51 localhost ovn_controller[153786]: 2025-11-23T10:02:51Z|00143|binding|INFO|Releasing lport a3a72db0-c10b-4285-8744-ba10ac2fc343 from this chassis (sb_readonly=0)
Nov 23 05:02:51 localhost ovn_controller[153786]: 2025-11-23T10:02:51Z|00144|binding|INFO|Setting lport a3a72db0-c10b-4285-8744-ba10ac2fc343 down in Southbound
Nov 23 05:02:51 localhost kernel: device tapa3a72db0-c1 left promiscuous mode
Nov 23 05:02:51 localhost nova_compute[281613]: 2025-11-23 10:02:51.190 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:51.190 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40e3f4f1-42e1-4e9d-b712-f215a04b83ca, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=a3a72db0-c10b-4285-8744-ba10ac2fc343) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:02:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:51.194 159429 INFO neutron.agent.ovn.metadata.agent [-] Port a3a72db0-c10b-4285-8744-ba10ac2fc343 in datapath c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b unbound from our chassis#033[00m
Nov 23 05:02:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:51.197 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:02:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:02:51.199 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccaaf73-bc09-405f-b955-f18ee2b830c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:02:51 localhost nova_compute[281613]: 2025-11-23 10:02:51.205 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:51 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:51.851 2 INFO neutron.agent.securitygroups_rpc [None req-8ada9fea-3f3e-487c-9d7c-c67e94466d69 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['2de10e3b-e1e6-47ac-8eeb-13eb3642fef8']#033[00m
Nov 23 05:02:52 localhost dnsmasq[316631]: read /var/lib/neutron/dhcp/c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b/addn_hosts - 0 addresses
Nov 23 05:02:52 localhost dnsmasq-dhcp[316631]: read /var/lib/neutron/dhcp/c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b/host
Nov 23 05:02:52 localhost dnsmasq-dhcp[316631]: read /var/lib/neutron/dhcp/c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b/opts
Nov 23 05:02:52 localhost podman[316709]: 2025-11-23 10:02:52.115743189 +0000 UTC m=+0.061974330 container kill 685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent [None req-9aac7950-d042-4211-a4e2-612100f309e8 - - - - - -] Unable to reload_allocations dhcp for c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa3a72db0-c1 not found in namespace qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b.
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     return fut.result()
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     raise self._exception
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapa3a72db0-c1 not found in namespace qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b.
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.146 262721 ERROR neutron.agent.dhcp.agent #033[00m
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.150 262721 INFO neutron.agent.dhcp.agent [None req-c53212fc-0a02-4eb3-8d03-6048a7ead8f7 - - - - - -] Synchronizing state#033[00m
Nov 23 05:02:52 localhost openstack_network_exporter[242118]: ERROR   10:02:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:02:52 localhost openstack_network_exporter[242118]: ERROR   10:02:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:02:52 localhost openstack_network_exporter[242118]: ERROR   10:02:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:02:52 localhost openstack_network_exporter[242118]: ERROR   10:02:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:02:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:02:52 localhost openstack_network_exporter[242118]: ERROR   10:02:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:02:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.367 262721 INFO neutron.agent.dhcp.agent [None req-59b0b4e6-8302-4ca1-b8f0-04bf2ad318c0 - - - - - -] All active networks have been fetched through RPC.#033[00m
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.368 262721 INFO neutron.agent.dhcp.agent [-] Starting network c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b dhcp configuration#033[00m
Nov 23 05:02:52 localhost dnsmasq[316631]: exiting on receipt of SIGTERM
Nov 23 05:02:52 localhost podman[316740]: 2025-11-23 10:02:52.566672073 +0000 UTC m=+0.068012256 container kill 685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:02:52 localhost systemd[1]: libpod-685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de.scope: Deactivated successfully.
Nov 23 05:02:52 localhost podman[316753]: 2025-11-23 10:02:52.651242841 +0000 UTC m=+0.070291768 container died 685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:02:52 localhost podman[316753]: 2025-11-23 10:02:52.687154076 +0000 UTC m=+0.106202973 container cleanup 685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 23 05:02:52 localhost systemd[1]: libpod-conmon-685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de.scope: Deactivated successfully.
Nov 23 05:02:52 localhost podman[316755]: 2025-11-23 10:02:52.73469104 +0000 UTC m=+0.142804437 container remove 685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.829 262721 INFO neutron.agent.dhcp.agent [None req-4c89f65a-9b20-4889-b41c-894b4e36188a - - - - - -] Finished network c07f5f67-ecf9-47df-b1c7-cc4a7f7c193b dhcp configuration#033[00m
Nov 23 05:02:52 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:02:52.831 262721 INFO neutron.agent.dhcp.agent [None req-59b0b4e6-8302-4ca1-b8f0-04bf2ad318c0 - - - - - -] Synchronizing state complete#033[00m
Nov 23 05:02:53 localhost nova_compute[281613]: 2025-11-23 10:02:53.008 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:53 localhost systemd[1]: var-lib-containers-storage-overlay-bc114d603f624142d3d58feed4c57030f8d5d776f129f3dfad1d2626ee0ee291-merged.mount: Deactivated successfully.
Nov 23 05:02:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-685e00650a8a89521697d4d178b3df0167848c9892326d35bd7bf4f9778593de-userdata-shm.mount: Deactivated successfully.
Nov 23 05:02:53 localhost systemd[1]: run-netns-qdhcp\x2dc07f5f67\x2decf9\x2d47df\x2db1c7\x2dcc4a7f7c193b.mount: Deactivated successfully.
Nov 23 05:02:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e135 e135: 6 total, 6 up, 6 in
Nov 23 05:02:54 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:54.304 2 INFO neutron.agent.securitygroups_rpc [None req-70f0f8e2-9250-4661-ae6f-f58e5f669345 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['dc864c5c-de53-475b-960e-083ffe4e3e6b']#033[00m
Nov 23 05:02:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:02:55 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:55.052 2 INFO neutron.agent.securitygroups_rpc [None req-ebacb72f-43a4-4e7f-beed-d94b86f532ab fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['dc864c5c-de53-475b-960e-083ffe4e3e6b']#033[00m
Nov 23 05:02:55 localhost nova_compute[281613]: 2025-11-23 10:02:55.660 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:55 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:55.739 2 INFO neutron.agent.securitygroups_rpc [None req-bb083733-91fb-4356-a91b-0dce7c35cb96 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d9259f0b-7c30-4dce-b81e-e0f698e442c7']#033[00m
Nov 23 05:02:56 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e136 e136: 6 total, 6 up, 6 in
Nov 23 05:02:56 localhost nova_compute[281613]: 2025-11-23 10:02:56.080 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:02:56 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:56.612 2 INFO neutron.agent.securitygroups_rpc [None req-79d36424-581f-4db8-8c43-da9cef64debb fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['2b80d24e-7954-4669-8041-3d535b2f9be2']#033[00m
Nov 23 05:02:56 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:56.753 2 INFO neutron.agent.securitygroups_rpc [None req-aea3005f-d734-4d7a-a8c4-fa6242ccbee5 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['2b80d24e-7954-4669-8041-3d535b2f9be2']#033[00m
Nov 23 05:02:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e137 e137: 6 total, 6 up, 6 in
Nov 23 05:02:58 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:58.578 2 INFO neutron.agent.securitygroups_rpc [None req-f9017110-ec89-4835-91db-9aa9b814a12e 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d9259f0b-7c30-4dce-b81e-e0f698e442c7', '469976a2-fa36-45e6-842e-95bc93db1438']#033[00m
Nov 23 05:02:58 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:58.757 2 INFO neutron.agent.securitygroups_rpc [None req-17703c58-f506-4a75-8387-af7c0c3c8d74 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m
Nov 23 05:02:59 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:59.186 2 INFO neutron.agent.securitygroups_rpc [None req-133cea40-228c-4af6-8f6b-d4d2d5a2eb51 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m
Nov 23 05:02:59 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:59.340 2 INFO neutron.agent.securitygroups_rpc [None req-75de0287-b544-4fab-ad82-848c9edeaf4f fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m
Nov 23 05:02:59 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:59.531 2 INFO neutron.agent.securitygroups_rpc [None req-94b4dea1-0a52-41d4-90a7-1f1aefba76c5 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['469976a2-fa36-45e6-842e-95bc93db1438']#033[00m
Nov 23 05:02:59 localhost neutron_sriov_agent[255613]: 2025-11-23 10:02:59.725 2 INFO neutron.agent.securitygroups_rpc [None req-d3ac2373-b722-49eb-839d-a87ede7d08ac fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m
Nov 23 05:02:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e138 e138: 6 total, 6 up, 6 in
Nov 23 05:03:00 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:00.554 2 INFO neutron.agent.securitygroups_rpc [None req-6ed9d977-5486-448d-8f02-a426fdb94759 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m
Nov 23 05:03:00 localhost nova_compute[281613]: 2025-11-23 10:03:00.664 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:01 localhost nova_compute[281613]: 2025-11-23 10:03:01.115 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:01 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:01.555 2 INFO neutron.agent.securitygroups_rpc [None req-d75c0dda-e48d-4259-9ad6-e58217c5f7b4 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['da2a8078-9813-4e58-b587-8e4d75d37f47']#033[00m
Nov 23 05:03:02 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:02.816 2 INFO neutron.agent.securitygroups_rpc [None req-1875a888-d52f-4402-8541-5b1512187260 fc7a3bc93bd3430296903212121be4cd d68a0c4a9a444afd973c16a38020089b - - default default] Security group rule updated ['c4aad9b2-b8cd-4803-b28f-3e773406a427']#033[00m
Nov 23 05:03:02 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e139 e139: 6 total, 6 up, 6 in
Nov 23 05:03:03 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:03.485 2 INFO neutron.agent.securitygroups_rpc [None req-d59d1e53-d1c4-451d-80ab-ba4648fa7a20 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['cb577a71-e41d-409b-b673-d883cbdab535']#033[00m
Nov 23 05:03:03 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e140 e140: 6 total, 6 up, 6 in
Nov 23 05:03:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:05.170 262721 INFO neutron.agent.linux.ip_lib [None req-48b22f18-0b41-4210-8bdc-2d213a52b7c4 - - - - - -] Device tap7b8175d3-23 cannot be used as it has no MAC address#033[00m
Nov 23 05:03:05 localhost nova_compute[281613]: 2025-11-23 10:03:05.194 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:05 localhost kernel: device tap7b8175d3-23 entered promiscuous mode
Nov 23 05:03:05 localhost NetworkManager[5990]: <info>  [1763892185.2043] manager: (tap7b8175d3-23): new Generic device (/org/freedesktop/NetworkManager/Devices/32)
Nov 23 05:03:05 localhost ovn_controller[153786]: 2025-11-23T10:03:05Z|00145|binding|INFO|Claiming lport 7b8175d3-23c5-4287-b150-ab741e319c50 for this chassis.
Nov 23 05:03:05 localhost ovn_controller[153786]: 2025-11-23T10:03:05Z|00146|binding|INFO|7b8175d3-23c5-4287-b150-ab741e319c50: Claiming unknown
Nov 23 05:03:05 localhost nova_compute[281613]: 2025-11-23 10:03:05.206 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:05 localhost systemd-udevd[316794]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:03:05 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:05.220 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49ebd7a691dd4ea59ffbe9f5703e77e4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e956203-ccec-4e0d-b2cd-a19e87dc158b, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=7b8175d3-23c5-4287-b150-ab741e319c50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:05 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:05.221 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 7b8175d3-23c5-4287-b150-ab741e319c50 in datapath 27537d61-8ae5-47a8-b217-f913cbb83ef7 bound to our chassis#033[00m
Nov 23 05:03:05 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:05.223 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 27537d61-8ae5-47a8-b217-f913cbb83ef7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:03:05 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:05.224 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e33cc359-0cbc-47e8-9354-89e395783a63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:05 localhost ovn_controller[153786]: 2025-11-23T10:03:05Z|00147|binding|INFO|Setting lport 7b8175d3-23c5-4287-b150-ab741e319c50 ovn-installed in OVS
Nov 23 05:03:05 localhost ovn_controller[153786]: 2025-11-23T10:03:05Z|00148|binding|INFO|Setting lport 7b8175d3-23c5-4287-b150-ab741e319c50 up in Southbound
Nov 23 05:03:05 localhost nova_compute[281613]: 2025-11-23 10:03:05.248 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:05 localhost nova_compute[281613]: 2025-11-23 10:03:05.284 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:05 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:03:05 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/792766910' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:03:05 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:03:05 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/792766910' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:03:05 localhost nova_compute[281613]: 2025-11-23 10:03:05.318 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:05 localhost nova_compute[281613]: 2025-11-23 10:03:05.666 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:06 localhost nova_compute[281613]: 2025-11-23 10:03:06.153 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:06 localhost podman[316849]: 
Nov 23 05:03:06 localhost podman[316849]: 2025-11-23 10:03:06.40891611 +0000 UTC m=+0.090855632 container create a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:03:06 localhost systemd[1]: Started libpod-conmon-a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b.scope.
Nov 23 05:03:06 localhost podman[316849]: 2025-11-23 10:03:06.363193857 +0000 UTC m=+0.045133439 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:03:06 localhost systemd[1]: Started libcrun container.
Nov 23 05:03:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08911954a5790264d9775dfef45b3980716bd60dd773057dcf231639aca5d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:03:06 localhost podman[316849]: 2025-11-23 10:03:06.480101641 +0000 UTC m=+0.162041163 container init a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:06 localhost podman[316849]: 2025-11-23 10:03:06.486332453 +0000 UTC m=+0.168272045 container start a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:03:06 localhost dnsmasq[316867]: started, version 2.85 cachesize 150
Nov 23 05:03:06 localhost dnsmasq[316867]: DNS service limited to local subnets
Nov 23 05:03:06 localhost dnsmasq[316867]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:03:06 localhost dnsmasq[316867]: warning: no upstream servers configured
Nov 23 05:03:06 localhost dnsmasq-dhcp[316867]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:03:06 localhost dnsmasq[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/addn_hosts - 0 addresses
Nov 23 05:03:06 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/host
Nov 23 05:03:06 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/opts
Nov 23 05:03:06 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:06.630 2 INFO neutron.agent.securitygroups_rpc [None req-3cc8ae58-6898-4e12-983b-8f9645bd7e63 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['a9134bcb-5194-43f8-ac1f-875c59af23f5', '736c2f34-1be4-42fe-9283-c00aaa4f421b', 'cb577a71-e41d-409b-b673-d883cbdab535']#033[00m
Nov 23 05:03:06 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:06.689 262721 INFO neutron.agent.dhcp.agent [None req-7b2820b9-60b8-4318-96c3-461d5b7f59b7 - - - - - -] DHCP configuration for ports {'fbcc667a-03be-4e7a-b7ea-70d45337df41'} is completed#033[00m
Nov 23 05:03:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e141 e141: 6 total, 6 up, 6 in
Nov 23 05:03:07 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:07.653 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:03:07 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:07.713 2 INFO neutron.agent.securitygroups_rpc [None req-63da6581-da02-4188-8e89-eab01a812a16 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['736c2f34-1be4-42fe-9283-c00aaa4f421b', 'a9134bcb-5194-43f8-ac1f-875c59af23f5']#033[00m
Nov 23 05:03:07 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:07.863 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:07Z, description=, device_id=48033520-90eb-4ec0-b819-02a2d28de042, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790afb2b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b80fd0>], id=6715ce30-32cc-4398-bd68-f1e643febbc0, ip_allocation=immediate, mac_address=fa:16:3e:1a:b9:46, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2170, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:03:07Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:03:08 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:03:08 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:03:08 localhost podman[316885]: 2025-11-23 10:03:08.102060792 +0000 UTC m=+0.061051854 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 05:03:08 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:03:08 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:08.381 262721 INFO neutron.agent.dhcp.agent [None req-fb759385-890f-4038-84d8-03cdb29fb2ce - - - - - -] DHCP configuration for ports {'6715ce30-32cc-4398-bd68-f1e643febbc0'} is completed
Nov 23 05:03:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:03:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:03:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:03:09 localhost podman[316905]: 2025-11-23 10:03:09.188360178 +0000 UTC m=+0.089590038 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 05:03:09 localhost podman[316905]: 2025-11-23 10:03:09.20596967 +0000 UTC m=+0.107199490 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Nov 23 05:03:09 localhost podman[316906]: 2025-11-23 10:03:09.24860679 +0000 UTC m=+0.148286397 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 05:03:09 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:03:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:09.270 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 05:03:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:09.270 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 05:03:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:09.271 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 05:03:09 localhost podman[316907]: 2025-11-23 10:03:09.346659498 +0000 UTC m=+0.241028399 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:03:09 localhost podman[316907]: 2025-11-23 10:03:09.35622967 +0000 UTC m=+0.250598531 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:03:09 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:03:09 localhost podman[316906]: 2025-11-23 10:03:09.412419451 +0000 UTC m=+0.312099128 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:03:09 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:03:09 localhost nova_compute[281613]: 2025-11-23 10:03:09.746 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:03:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:10 localhost nova_compute[281613]: 2025-11-23 10:03:10.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 05:03:10 localhost nova_compute[281613]: 2025-11-23 10:03:10.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 23 05:03:10 localhost nova_compute[281613]: 2025-11-23 10:03:10.075 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 23 05:03:10 localhost systemd[1]: tmp-crun.6Ai3uh.mount: Deactivated successfully.
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:03:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:03:10 localhost nova_compute[281613]: 2025-11-23 10:03:10.670 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:03:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:03:10 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/95924581' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:03:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:03:10 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/95924581' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:03:11 localhost nova_compute[281613]: 2025-11-23 10:03:11.154 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:03:11 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:11.238 2 INFO neutron.agent.securitygroups_rpc [None req-261eb8d2-9d50-4b0d-8acd-8f9cb046e671 80c914b51bd84d48bc00092fc8abcee7 1e3b93ef61044aafb71b30163a32d7ac - - default default] Security group member updated ['d7ead8f7-80d5-4103-ab91-19b87956485a']
Nov 23 05:03:11 localhost podman[240144]: time="2025-11-23T10:03:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:03:11 localhost podman[240144]: @ - - [23/Nov/2025:10:03:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159978 "" "Go-http-client/1.1"
Nov 23 05:03:11 localhost podman[240144]: @ - - [23/Nov/2025:10:03:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20155 "" "Go-http-client/1.1"
Nov 23 05:03:11 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:11.727 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:11Z, description=, device_id=48033520-90eb-4ec0-b819-02a2d28de042, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790aa03a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790aa02e0>], id=87986377-8c37-4f69-804c-98b65b7c182d, ip_allocation=immediate, mac_address=fa:16:3e:9e:53:c9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:02Z, description=, dns_domain=, id=27537d61-8ae5-47a8-b217-f913cbb83ef7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1492741888-network, port_security_enabled=True, project_id=49ebd7a691dd4ea59ffbe9f5703e77e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62870, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2143, status=ACTIVE, subnets=['03b71d3e-a613-43b4-8aab-cffb0ff2b328'], tags=[], tenant_id=49ebd7a691dd4ea59ffbe9f5703e77e4, updated_at=2025-11-23T10:03:04Z, vlan_transparent=None, network_id=27537d61-8ae5-47a8-b217-f913cbb83ef7, port_security_enabled=False, project_id=49ebd7a691dd4ea59ffbe9f5703e77e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2184, status=DOWN, tags=[], tenant_id=49ebd7a691dd4ea59ffbe9f5703e77e4, updated_at=2025-11-23T10:03:11Z on network 27537d61-8ae5-47a8-b217-f913cbb83ef7#033[00m
Nov 23 05:03:11 localhost systemd[1]: tmp-crun.Asi6dr.mount: Deactivated successfully.
Nov 23 05:03:11 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e142 e142: 6 total, 6 up, 6 in
Nov 23 05:03:11 localhost dnsmasq[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/addn_hosts - 1 addresses
Nov 23 05:03:11 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/host
Nov 23 05:03:11 localhost podman[316980]: 2025-11-23 10:03:11.955744413 +0000 UTC m=+0.074399280 container kill a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 23 05:03:11 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/opts
Nov 23 05:03:12 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:12.251 262721 INFO neutron.agent.dhcp.agent [None req-ad3d9282-9747-4dbf-acc5-ce2dad79d70b - - - - - -] DHCP configuration for ports {'87986377-8c37-4f69-804c-98b65b7c182d'} is completed
Nov 23 05:03:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e143 e143: 6 total, 6 up, 6 in
Nov 23 05:03:13 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:03:13 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2534350437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:03:13 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:03:13 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2534350437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:03:14 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:14.130 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:11Z, description=, device_id=48033520-90eb-4ec0-b819-02a2d28de042, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b66100>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d16a30>], id=87986377-8c37-4f69-804c-98b65b7c182d, ip_allocation=immediate, mac_address=fa:16:3e:9e:53:c9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:02Z, description=, dns_domain=, id=27537d61-8ae5-47a8-b217-f913cbb83ef7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1492741888-network, port_security_enabled=True, project_id=49ebd7a691dd4ea59ffbe9f5703e77e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62870, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2143, status=ACTIVE, subnets=['03b71d3e-a613-43b4-8aab-cffb0ff2b328'], tags=[], tenant_id=49ebd7a691dd4ea59ffbe9f5703e77e4, updated_at=2025-11-23T10:03:04Z, vlan_transparent=None, network_id=27537d61-8ae5-47a8-b217-f913cbb83ef7, port_security_enabled=False, project_id=49ebd7a691dd4ea59ffbe9f5703e77e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2184, status=DOWN, tags=[], tenant_id=49ebd7a691dd4ea59ffbe9f5703e77e4, updated_at=2025-11-23T10:03:11Z on network 27537d61-8ae5-47a8-b217-f913cbb83ef7#033[00m
Nov 23 05:03:14 localhost dnsmasq[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/addn_hosts - 1 addresses
Nov 23 05:03:14 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/host
Nov 23 05:03:14 localhost podman[317019]: 2025-11-23 10:03:14.352662723 +0000 UTC m=+0.057639282 container kill a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 05:03:14 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/opts
Nov 23 05:03:14 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:14.593 262721 INFO neutron.agent.dhcp.agent [None req-b21eb42f-de86-4b47-90b9-f53863c5d662 - - - - - -] DHCP configuration for ports {'87986377-8c37-4f69-804c-98b65b7c182d'} is completed
Nov 23 05:03:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:03:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/329276956' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:03:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:03:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/329276956' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:03:15 localhost nova_compute[281613]: 2025-11-23 10:03:15.076 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:15 localhost nova_compute[281613]: 2025-11-23 10:03:15.077 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:15 localhost nova_compute[281613]: 2025-11-23 10:03:15.077 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:15 localhost nova_compute[281613]: 2025-11-23 10:03:15.078 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:03:15 localhost nova_compute[281613]: 2025-11-23 10:03:15.672 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:03:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/327852875' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:03:15 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:03:15 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/327852875' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:03:16 localhost nova_compute[281613]: 2025-11-23 10:03:16.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:16 localhost nova_compute[281613]: 2025-11-23 10:03:16.202 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:16.546 262721 INFO neutron.agent.linux.ip_lib [None req-7698b705-57ab-410b-85af-1c8145429f6a - - - - - -] Device tap7dc31c98-8b cannot be used as it has no MAC address#033[00m
Nov 23 05:03:16 localhost nova_compute[281613]: 2025-11-23 10:03:16.566 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:16 localhost kernel: device tap7dc31c98-8b entered promiscuous mode
Nov 23 05:03:16 localhost nova_compute[281613]: 2025-11-23 10:03:16.575 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:16 localhost ovn_controller[153786]: 2025-11-23T10:03:16Z|00149|binding|INFO|Claiming lport 7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334 for this chassis.
Nov 23 05:03:16 localhost ovn_controller[153786]: 2025-11-23T10:03:16Z|00150|binding|INFO|7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334: Claiming unknown
Nov 23 05:03:16 localhost NetworkManager[5990]: <info>  [1763892196.5790] manager: (tap7dc31c98-8b): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Nov 23 05:03:16 localhost systemd-udevd[317049]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:03:16 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:16.594 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-839e4a81-3d81-4b25-9a08-972f0365d054', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-839e4a81-3d81-4b25-9a08-972f0365d054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ccd366-c31e-40c3-b79a-78f9dbeabdd5, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:16 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:16.596 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334 in datapath 839e4a81-3d81-4b25-9a08-972f0365d054 bound to our chassis#033[00m
Nov 23 05:03:16 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:16.598 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 839e4a81-3d81-4b25-9a08-972f0365d054 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:03:16 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:16.600 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f5f6da-acae-49a6-a3e3-7dce23eeab44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:16 localhost journal[229736]: ethtool ioctl error on tap7dc31c98-8b: No such device
Nov 23 05:03:16 localhost ovn_controller[153786]: 2025-11-23T10:03:16Z|00151|binding|INFO|Setting lport 7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334 ovn-installed in OVS
Nov 23 05:03:16 localhost ovn_controller[153786]: 2025-11-23T10:03:16Z|00152|binding|INFO|Setting lport 7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334 up in Southbound
Nov 23 05:03:16 localhost journal[229736]: ethtool ioctl error on tap7dc31c98-8b: No such device
Nov 23 05:03:16 localhost nova_compute[281613]: 2025-11-23 10:03:16.608 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:16 localhost journal[229736]: ethtool ioctl error on tap7dc31c98-8b: No such device
Nov 23 05:03:16 localhost journal[229736]: ethtool ioctl error on tap7dc31c98-8b: No such device
Nov 23 05:03:16 localhost journal[229736]: ethtool ioctl error on tap7dc31c98-8b: No such device
Nov 23 05:03:16 localhost journal[229736]: ethtool ioctl error on tap7dc31c98-8b: No such device
Nov 23 05:03:16 localhost journal[229736]: ethtool ioctl error on tap7dc31c98-8b: No such device
Nov 23 05:03:16 localhost journal[229736]: ethtool ioctl error on tap7dc31c98-8b: No such device
Nov 23 05:03:16 localhost nova_compute[281613]: 2025-11-23 10:03:16.642 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:16 localhost nova_compute[281613]: 2025-11-23 10:03:16.667 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:17 localhost nova_compute[281613]: 2025-11-23 10:03:17.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:17 localhost nova_compute[281613]: 2025-11-23 10:03:17.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:03:17 localhost nova_compute[281613]: 2025-11-23 10:03:17.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:03:17 localhost nova_compute[281613]: 2025-11-23 10:03:17.043 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:03:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e144 e144: 6 total, 6 up, 6 in
Nov 23 05:03:17 localhost podman[317120]: 
Nov 23 05:03:17 localhost podman[317120]: 2025-11-23 10:03:17.527103549 +0000 UTC m=+0.090192223 container create 225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 05:03:17 localhost systemd[1]: Started libpod-conmon-225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2.scope.
Nov 23 05:03:17 localhost podman[317120]: 2025-11-23 10:03:17.4848228 +0000 UTC m=+0.047911524 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:03:17 localhost systemd[1]: Started libcrun container.
Nov 23 05:03:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5417e9a3b07179b95bd82ae815fe0d42b2fedb4c94914e43dd532f8c663c4afc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:03:17 localhost podman[317120]: 2025-11-23 10:03:17.605048537 +0000 UTC m=+0.168137211 container init 225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:17 localhost podman[317120]: 2025-11-23 10:03:17.615488832 +0000 UTC m=+0.178577516 container start 225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:17 localhost dnsmasq[317138]: started, version 2.85 cachesize 150
Nov 23 05:03:17 localhost dnsmasq[317138]: DNS service limited to local subnets
Nov 23 05:03:17 localhost dnsmasq[317138]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:03:17 localhost dnsmasq[317138]: warning: no upstream servers configured
Nov 23 05:03:17 localhost dnsmasq-dhcp[317138]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Nov 23 05:03:17 localhost dnsmasq[317138]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/addn_hosts - 0 addresses
Nov 23 05:03:17 localhost dnsmasq-dhcp[317138]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/host
Nov 23 05:03:17 localhost dnsmasq-dhcp[317138]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/opts
Nov 23 05:03:17 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:17.828 262721 INFO neutron.agent.dhcp.agent [None req-603caf1f-566c-4b98-80c7-d3c98af796f5 - - - - - -] DHCP configuration for ports {'43aa9782-19c9-4369-aa81-80d553f1d492'} is completed#033[00m
Nov 23 05:03:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e145 e145: 6 total, 6 up, 6 in
Nov 23 05:03:18 localhost dnsmasq[317138]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/addn_hosts - 0 addresses
Nov 23 05:03:18 localhost dnsmasq-dhcp[317138]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/host
Nov 23 05:03:18 localhost dnsmasq-dhcp[317138]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/opts
Nov 23 05:03:18 localhost podman[317154]: 2025-11-23 10:03:18.038091029 +0000 UTC m=+0.061098446 container kill 225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 05:03:18 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:18.302 262721 INFO neutron.agent.dhcp.agent [None req-87929e05-1262-4c33-ab25-e026e47c19d8 - - - - - -] DHCP configuration for ports {'7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334', '43aa9782-19c9-4369-aa81-80d553f1d492'} is completed#033[00m
Nov 23 05:03:19 localhost nova_compute[281613]: 2025-11-23 10:03:19.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:19 localhost nova_compute[281613]: 2025-11-23 10:03:19.021 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:03:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:03:19 localhost nova_compute[281613]: 2025-11-23 10:03:19.323 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:19 localhost nova_compute[281613]: 2025-11-23 10:03:19.324 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:19 localhost nova_compute[281613]: 2025-11-23 10:03:19.324 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:19 localhost nova_compute[281613]: 2025-11-23 10:03:19.324 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:03:19 localhost nova_compute[281613]: 2025-11-23 10:03:19.324 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:19 localhost podman[317183]: 2025-11-23 10:03:19.376447335 +0000 UTC m=+0.073753963 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 23 05:03:19 localhost podman[317175]: 2025-11-23 10:03:19.422680533 +0000 UTC m=+0.127437886 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:19 localhost podman[317183]: 2025-11-23 10:03:19.433831178 +0000 UTC m=+0.131137796 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 05:03:19 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:03:19 localhost podman[317175]: 2025-11-23 10:03:19.455834361 +0000 UTC m=+0.160591704 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 23 05:03:19 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:03:19 localhost podman[317176]: 2025-11-23 10:03:19.528591506 +0000 UTC m=+0.228772993 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 05:03:19 localhost podman[317176]: 2025-11-23 10:03:19.544757829 +0000 UTC m=+0.244939336 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 05:03:19 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:03:19 localhost podman[317177]: 2025-11-23 10:03:19.590913326 +0000 UTC m=+0.291269928 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:03:19 localhost podman[317177]: 2025-11-23 10:03:19.602975706 +0000 UTC m=+0.303332328 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:03:19 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:03:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:03:19 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/887069621' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:03:19 localhost nova_compute[281613]: 2025-11-23 10:03:19.761 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.010 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.012 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11637MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.013 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.013 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:20 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:20.083 262721 INFO neutron.agent.linux.ip_lib [None req-0368de3d-f8fc-46b4-9515-d06959b2bb64 - - - - - -] Device tap54dc2cc4-17 cannot be used as it has no MAC address#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.154 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:20 localhost kernel: device tap54dc2cc4-17 entered promiscuous mode
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.162 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:20 localhost systemd-udevd[317291]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:03:20 localhost ovn_controller[153786]: 2025-11-23T10:03:20Z|00153|binding|INFO|Claiming lport 54dc2cc4-177d-4576-b8fe-0d054982705b for this chassis.
Nov 23 05:03:20 localhost ovn_controller[153786]: 2025-11-23T10:03:20Z|00154|binding|INFO|54dc2cc4-177d-4576-b8fe-0d054982705b: Claiming unknown
Nov 23 05:03:20 localhost NetworkManager[5990]: <info>  [1763892200.1701] manager: (tap54dc2cc4-17): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.177 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f314adb8-57dd-4564-82f6-958b39055cae, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=54dc2cc4-177d-4576-b8fe-0d054982705b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.182 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 54dc2cc4-177d-4576-b8fe-0d054982705b in datapath 9a7fb5b2-b1e2-4c40-a660-0aef70612eb7 bound to our chassis#033[00m
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.184 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a7fb5b2-b1e2-4c40-a660-0aef70612eb7 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.185 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe01462-5953-46e1-8045-c3d775df8d6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.188 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.189 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:03:20 localhost ovn_controller[153786]: 2025-11-23T10:03:20Z|00155|binding|INFO|Setting lport 54dc2cc4-177d-4576-b8fe-0d054982705b ovn-installed in OVS
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.209 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:20 localhost ovn_controller[153786]: 2025-11-23T10:03:20Z|00156|binding|INFO|Setting lport 54dc2cc4-177d-4576-b8fe-0d054982705b up in Southbound
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.250 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.274 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.278 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.298 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.298 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.465 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 05:03:20 localhost dnsmasq[317138]: exiting on receipt of SIGTERM
Nov 23 05:03:20 localhost systemd[1]: libpod-225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2.scope: Deactivated successfully.
Nov 23 05:03:20 localhost podman[317325]: 2025-11-23 10:03:20.495921749 +0000 UTC m=+0.063057900 container kill 225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.497 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.513 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:20 localhost systemd[1]: tmp-crun.sijseB.mount: Deactivated successfully.
Nov 23 05:03:20 localhost podman[317345]: 2025-11-23 10:03:20.562711039 +0000 UTC m=+0.049012744 container died 225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:03:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2-userdata-shm.mount: Deactivated successfully.
Nov 23 05:03:20 localhost systemd[1]: var-lib-containers-storage-overlay-5417e9a3b07179b95bd82ae815fe0d42b2fedb4c94914e43dd532f8c663c4afc-merged.mount: Deactivated successfully.
Nov 23 05:03:20 localhost podman[317345]: 2025-11-23 10:03:20.605948725 +0000 UTC m=+0.092250400 container remove 225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 05:03:20 localhost systemd[1]: libpod-conmon-225be9c450ebd334ce82fb73f10ffa52dba8d07521c82e3b2e90e764d00b93d2.scope: Deactivated successfully.
Nov 23 05:03:20 localhost ovn_controller[153786]: 2025-11-23T10:03:20Z|00157|binding|INFO|Releasing lport 7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334 from this chassis (sb_readonly=0)
Nov 23 05:03:20 localhost kernel: device tap7dc31c98-8b left promiscuous mode
Nov 23 05:03:20 localhost ovn_controller[153786]: 2025-11-23T10:03:20Z|00158|binding|INFO|Setting lport 7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334 down in Southbound
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.625 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.640 159429 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 910ff207-d758-4954-9a43-f2b492e750e8 with type ""#033[00m
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.642 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-839e4a81-3d81-4b25-9a08-972f0365d054', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-839e4a81-3d81-4b25-9a08-972f0365d054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ccd366-c31e-40c3-b79a-78f9dbeabdd5, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.645 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 7dc31c98-8b8b-42c0-a3d7-ccd9e3bf9334 in datapath 839e4a81-3d81-4b25-9a08-972f0365d054 unbound from our chassis#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.646 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.650 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 839e4a81-3d81-4b25-9a08-972f0365d054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:03:20 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:20.651 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ed14e8-6acc-416e-8b2c-526fdd599bda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:20 localhost nova_compute[281613]: 2025-11-23 10:03:20.675 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:20 localhost systemd[1]: run-netns-qdhcp\x2d839e4a81\x2d3d81\x2d4b25\x2d9a08\x2d972f0365d054.mount: Deactivated successfully.
Nov 23 05:03:20 localhost sshd[317398]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:03:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:03:20 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1650261008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:03:21 localhost nova_compute[281613]: 2025-11-23 10:03:20.998 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:21 localhost nova_compute[281613]: 2025-11-23 10:03:21.007 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:03:21 localhost nova_compute[281613]: 2025-11-23 10:03:21.023 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:03:21 localhost nova_compute[281613]: 2025-11-23 10:03:21.025 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:03:21 localhost nova_compute[281613]: 2025-11-23 10:03:21.027 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:21 localhost podman[317424]: 
Nov 23 05:03:21 localhost podman[317424]: 2025-11-23 10:03:21.124504353 +0000 UTC m=+0.087729796 container create c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:21 localhost systemd[1]: Started libpod-conmon-c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a.scope.
Nov 23 05:03:21 localhost systemd[1]: Started libcrun container.
Nov 23 05:03:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782fd9f6eff4e0eab05449a04120b9e69ed50e27b03cd4c9b0a1ad12127e8578/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:03:21 localhost podman[317424]: 2025-11-23 10:03:21.081645268 +0000 UTC m=+0.044870741 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:03:21 localhost podman[317424]: 2025-11-23 10:03:21.187051918 +0000 UTC m=+0.150277371 container init c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:03:21 localhost podman[317424]: 2025-11-23 10:03:21.197135545 +0000 UTC m=+0.160361048 container start c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:03:21 localhost dnsmasq[317442]: started, version 2.85 cachesize 150
Nov 23 05:03:21 localhost dnsmasq[317442]: DNS service limited to local subnets
Nov 23 05:03:21 localhost dnsmasq[317442]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:03:21 localhost dnsmasq[317442]: warning: no upstream servers configured
Nov 23 05:03:21 localhost dnsmasq-dhcp[317442]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:03:21 localhost dnsmasq[317442]: read /var/lib/neutron/dhcp/9a7fb5b2-b1e2-4c40-a660-0aef70612eb7/addn_hosts - 0 addresses
Nov 23 05:03:21 localhost dnsmasq-dhcp[317442]: read /var/lib/neutron/dhcp/9a7fb5b2-b1e2-4c40-a660-0aef70612eb7/host
Nov 23 05:03:21 localhost dnsmasq-dhcp[317442]: read /var/lib/neutron/dhcp/9a7fb5b2-b1e2-4c40-a660-0aef70612eb7/opts
Nov 23 05:03:21 localhost nova_compute[281613]: 2025-11-23 10:03:21.239 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:21 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:21.257 262721 INFO neutron.agent.dhcp.agent [None req-59b0b4e6-8302-4ca1-b8f0-04bf2ad318c0 - - - - - -] Synchronizing state#033[00m
Nov 23 05:03:21 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:21.343 262721 INFO neutron.agent.dhcp.agent [None req-fe1e7ea4-2e8b-46e2-bfda-80a9cb56409f - - - - - -] DHCP configuration for ports {'853eb836-a881-456f-9730-9e16890d6702'} is completed#033[00m
Nov 23 05:03:21 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:21.528 262721 INFO neutron.agent.dhcp.agent [None req-c59f367d-9267-4383-8720-9aee06b65a5f - - - - - -] All active networks have been fetched through RPC.#033[00m
Nov 23 05:03:21 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:21.535 262721 INFO neutron.agent.dhcp.agent [-] Starting network 839e4a81-3d81-4b25-9a08-972f0365d054 dhcp configuration#033[00m
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.028 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.029 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.047 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:22 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:22.124 262721 INFO neutron.agent.linux.ip_lib [None req-9047eb54-765d-4ded-9186-8daff45aac3f - - - - - -] Device tapedffecea-5a cannot be used as it has no MAC address#033[00m
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.150 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:22 localhost kernel: device tapedffecea-5a entered promiscuous mode
Nov 23 05:03:22 localhost NetworkManager[5990]: <info>  [1763892202.1588] manager: (tapedffecea-5a): new Generic device (/org/freedesktop/NetworkManager/Devices/35)
Nov 23 05:03:22 localhost ovn_controller[153786]: 2025-11-23T10:03:22Z|00159|binding|INFO|Claiming lport edffecea-5ab5-4bb0-a6d9-ef3d97accc9a for this chassis.
Nov 23 05:03:22 localhost ovn_controller[153786]: 2025-11-23T10:03:22Z|00160|binding|INFO|edffecea-5ab5-4bb0-a6d9-ef3d97accc9a: Claiming unknown
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.159 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:22 localhost systemd-udevd[317293]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:03:22 localhost ovn_controller[153786]: 2025-11-23T10:03:22Z|00161|binding|INFO|Setting lport edffecea-5ab5-4bb0-a6d9-ef3d97accc9a ovn-installed in OVS
Nov 23 05:03:22 localhost ovn_controller[153786]: 2025-11-23T10:03:22Z|00162|binding|INFO|Setting lport edffecea-5ab5-4bb0-a6d9-ef3d97accc9a up in Southbound
Nov 23 05:03:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:22.170 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-839e4a81-3d81-4b25-9a08-972f0365d054', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-839e4a81-3d81-4b25-9a08-972f0365d054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ccd366-c31e-40c3-b79a-78f9dbeabdd5, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=1, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=edffecea-5ab5-4bb0-a6d9-ef3d97accc9a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.171 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:22.173 159429 INFO neutron.agent.ovn.metadata.agent [-] Port edffecea-5ab5-4bb0-a6d9-ef3d97accc9a in datapath 839e4a81-3d81-4b25-9a08-972f0365d054 bound to our chassis#033[00m
Nov 23 05:03:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:22.174 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 839e4a81-3d81-4b25-9a08-972f0365d054 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.174 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:22.175 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[20128df5-6291-4a71-a63a-bda06e60b4c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:22 localhost journal[229736]: ethtool ioctl error on tapedffecea-5a: No such device
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.194 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:22 localhost journal[229736]: ethtool ioctl error on tapedffecea-5a: No such device
Nov 23 05:03:22 localhost journal[229736]: ethtool ioctl error on tapedffecea-5a: No such device
Nov 23 05:03:22 localhost journal[229736]: ethtool ioctl error on tapedffecea-5a: No such device
Nov 23 05:03:22 localhost journal[229736]: ethtool ioctl error on tapedffecea-5a: No such device
Nov 23 05:03:22 localhost journal[229736]: ethtool ioctl error on tapedffecea-5a: No such device
Nov 23 05:03:22 localhost journal[229736]: ethtool ioctl error on tapedffecea-5a: No such device
Nov 23 05:03:22 localhost journal[229736]: ethtool ioctl error on tapedffecea-5a: No such device
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.229 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:22 localhost nova_compute[281613]: 2025-11-23 10:03:22.261 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:22 localhost openstack_network_exporter[242118]: ERROR   10:03:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:03:22 localhost openstack_network_exporter[242118]: ERROR   10:03:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:03:22 localhost openstack_network_exporter[242118]: ERROR   10:03:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:03:22 localhost openstack_network_exporter[242118]: ERROR   10:03:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:03:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:03:22 localhost openstack_network_exporter[242118]: ERROR   10:03:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:03:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:03:22 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e146 e146: 6 total, 6 up, 6 in
Nov 23 05:03:23 localhost podman[317522]: 
Nov 23 05:03:23 localhost podman[317522]: 2025-11-23 10:03:23.116350276 +0000 UTC m=+0.076743075 container create 1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:03:23 localhost nova_compute[281613]: 2025-11-23 10:03:23.158 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:23 localhost systemd[1]: Started libpod-conmon-1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c.scope.
Nov 23 05:03:23 localhost systemd[1]: tmp-crun.AmoEdw.mount: Deactivated successfully.
Nov 23 05:03:23 localhost podman[317522]: 2025-11-23 10:03:23.08477349 +0000 UTC m=+0.045166349 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:03:23 localhost systemd[1]: Started libcrun container.
Nov 23 05:03:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7d47744333b0a6813c5307c3ba468c5eef9aaa8bf48332cb353c2bde352a9e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:03:23 localhost podman[317522]: 2025-11-23 10:03:23.210635511 +0000 UTC m=+0.171028340 container init 1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 05:03:23 localhost podman[317522]: 2025-11-23 10:03:23.220250824 +0000 UTC m=+0.180643653 container start 1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:23 localhost dnsmasq[317540]: started, version 2.85 cachesize 150
Nov 23 05:03:23 localhost dnsmasq[317540]: DNS service limited to local subnets
Nov 23 05:03:23 localhost dnsmasq[317540]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:03:23 localhost dnsmasq[317540]: warning: no upstream servers configured
Nov 23 05:03:23 localhost dnsmasq-dhcp[317540]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Nov 23 05:03:23 localhost dnsmasq[317540]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/addn_hosts - 0 addresses
Nov 23 05:03:23 localhost dnsmasq-dhcp[317540]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/host
Nov 23 05:03:23 localhost dnsmasq-dhcp[317540]: read /var/lib/neutron/dhcp/839e4a81-3d81-4b25-9a08-972f0365d054/opts
Nov 23 05:03:23 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:23.279 262721 INFO neutron.agent.dhcp.agent [None req-9047eb54-765d-4ded-9186-8daff45aac3f - - - - - -] Finished network 839e4a81-3d81-4b25-9a08-972f0365d054 dhcp configuration#033[00m
Nov 23 05:03:23 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:23.280 262721 INFO neutron.agent.dhcp.agent [None req-c59f367d-9267-4383-8720-9aee06b65a5f - - - - - -] Synchronizing state complete#033[00m
Nov 23 05:03:23 localhost nova_compute[281613]: 2025-11-23 10:03:23.355 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:23 localhost ovn_controller[153786]: 2025-11-23T10:03:23Z|00163|binding|INFO|Releasing lport edffecea-5ab5-4bb0-a6d9-ef3d97accc9a from this chassis (sb_readonly=0)
Nov 23 05:03:23 localhost kernel: device tapedffecea-5a left promiscuous mode
Nov 23 05:03:23 localhost ovn_controller[153786]: 2025-11-23T10:03:23Z|00164|binding|INFO|Setting lport edffecea-5ab5-4bb0-a6d9-ef3d97accc9a down in Southbound
Nov 23 05:03:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:23.368 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-839e4a81-3d81-4b25-9a08-972f0365d054', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-839e4a81-3d81-4b25-9a08-972f0365d054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6cc558ab5ea444ca89055d39fcd5b762', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8ccd366-c31e-40c3-b79a-78f9dbeabdd5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=edffecea-5ab5-4bb0-a6d9-ef3d97accc9a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:23.370 159429 INFO neutron.agent.ovn.metadata.agent [-] Port edffecea-5ab5-4bb0-a6d9-ef3d97accc9a in datapath 839e4a81-3d81-4b25-9a08-972f0365d054 unbound from our chassis#033[00m
Nov 23 05:03:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:23.372 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 839e4a81-3d81-4b25-9a08-972f0365d054 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:03:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:23.373 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[1341064f-9c58-4873-bcc2-7828a107d1ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:23 localhost nova_compute[281613]: 2025-11-23 10:03:23.385 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:23 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:03:23 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:03:23 localhost podman[317560]: 2025-11-23 10:03:23.532892107 +0000 UTC m=+0.073048744 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:23 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:03:23 localhost nova_compute[281613]: 2025-11-23 10:03:23.664 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:23 localhost ovn_controller[153786]: 2025-11-23T10:03:23Z|00165|binding|INFO|Releasing lport 54dc2cc4-177d-4576-b8fe-0d054982705b from this chassis (sb_readonly=0)
Nov 23 05:03:23 localhost ovn_controller[153786]: 2025-11-23T10:03:23Z|00166|binding|INFO|Setting lport 54dc2cc4-177d-4576-b8fe-0d054982705b down in Southbound
Nov 23 05:03:23 localhost kernel: device tap54dc2cc4-17 left promiscuous mode
Nov 23 05:03:23 localhost dnsmasq[317540]: exiting on receipt of SIGTERM
Nov 23 05:03:23 localhost podman[317590]: 2025-11-23 10:03:23.675145656 +0000 UTC m=+0.071476540 container kill 1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 05:03:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:23.676 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f314adb8-57dd-4564-82f6-958b39055cae, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=54dc2cc4-177d-4576-b8fe-0d054982705b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:23.678 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 54dc2cc4-177d-4576-b8fe-0d054982705b in datapath 9a7fb5b2-b1e2-4c40-a660-0aef70612eb7 unbound from our chassis#033[00m
Nov 23 05:03:23 localhost systemd[1]: libpod-1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c.scope: Deactivated successfully.
Nov 23 05:03:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:23.682 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:03:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:23.682 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e165c71e-d0b0-4055-8c12-74d88cc7ac6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:23 localhost nova_compute[281613]: 2025-11-23 10:03:23.685 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:23 localhost podman[317612]: 2025-11-23 10:03:23.762229205 +0000 UTC m=+0.054049384 container died 1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:23 localhost podman[317612]: 2025-11-23 10:03:23.802055887 +0000 UTC m=+0.093876036 container remove 1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-839e4a81-3d81-4b25-9a08-972f0365d054, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:03:23 localhost systemd[1]: libpod-conmon-1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c.scope: Deactivated successfully.
Nov 23 05:03:23 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:23.826 262721 INFO neutron.agent.dhcp.agent [None req-65babeb3-19e1-499f-8468-c3dd05073475 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:03:24 localhost nova_compute[281613]: 2025-11-23 10:03:24.023 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:24 localhost systemd[1]: var-lib-containers-storage-overlay-f7d47744333b0a6813c5307c3ba468c5eef9aaa8bf48332cb353c2bde352a9e5-merged.mount: Deactivated successfully.
Nov 23 05:03:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a6d20458357ee55ab219ed4650d38099a93744926f87f24aa8cba4b4c88a40c-userdata-shm.mount: Deactivated successfully.
Nov 23 05:03:24 localhost systemd[1]: run-netns-qdhcp\x2d839e4a81\x2d3d81\x2d4b25\x2d9a08\x2d972f0365d054.mount: Deactivated successfully.
Nov 23 05:03:24 localhost dnsmasq[317442]: read /var/lib/neutron/dhcp/9a7fb5b2-b1e2-4c40-a660-0aef70612eb7/addn_hosts - 0 addresses
Nov 23 05:03:24 localhost dnsmasq-dhcp[317442]: read /var/lib/neutron/dhcp/9a7fb5b2-b1e2-4c40-a660-0aef70612eb7/host
Nov 23 05:03:24 localhost podman[317654]: 2025-11-23 10:03:24.334593918 +0000 UTC m=+0.085552097 container kill c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 05:03:24 localhost dnsmasq-dhcp[317442]: read /var/lib/neutron/dhcp/9a7fb5b2-b1e2-4c40-a660-0aef70612eb7/opts
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent [None req-680ead57-714b-475e-9253-3cfa9b0588a3 - - - - - -] Unable to reload_allocations dhcp for 9a7fb5b2-b1e2-4c40-a660-0aef70612eb7.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap54dc2cc4-17 not found in namespace qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7.
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     return fut.result()
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     raise self._exception
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap54dc2cc4-17 not found in namespace qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7.
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.369 262721 ERROR neutron.agent.dhcp.agent #033[00m
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.376 262721 INFO neutron.agent.dhcp.agent [None req-c59f367d-9267-4383-8720-9aee06b65a5f - - - - - -] Synchronizing state#033[00m
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.642 262721 INFO neutron.agent.dhcp.agent [None req-d85b9d4f-26f1-4699-a3ff-df977de67cac - - - - - -] All active networks have been fetched through RPC.#033[00m
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.644 262721 INFO neutron.agent.dhcp.agent [-] Starting network 9a7fb5b2-b1e2-4c40-a660-0aef70612eb7 dhcp configuration#033[00m
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.644 262721 INFO neutron.agent.dhcp.agent [-] Finished network 9a7fb5b2-b1e2-4c40-a660-0aef70612eb7 dhcp configuration#033[00m
Nov 23 05:03:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:24.645 262721 INFO neutron.agent.dhcp.agent [None req-d85b9d4f-26f1-4699-a3ff-df977de67cac - - - - - -] Synchronizing state complete#033[00m
Nov 23 05:03:24 localhost podman[317684]: 2025-11-23 10:03:24.843483761 +0000 UTC m=+0.062851215 container kill c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 05:03:24 localhost dnsmasq[317442]: exiting on receipt of SIGTERM
Nov 23 05:03:24 localhost systemd[1]: libpod-c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a.scope: Deactivated successfully.
Nov 23 05:03:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:24 localhost podman[317698]: 2025-11-23 10:03:24.912723329 +0000 UTC m=+0.057107867 container died c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 05:03:24 localhost podman[317698]: 2025-11-23 10:03:24.962776861 +0000 UTC m=+0.107161359 container cleanup c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:03:24 localhost systemd[1]: libpod-conmon-c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a.scope: Deactivated successfully.
Nov 23 05:03:25 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:25.012 2 INFO neutron.agent.securitygroups_rpc [None req-82f52ebc-5307-4b75-9da0-dca3e27d739d da47bb8e9ce044b7a6c60aeaa303445e 1ed74022d4944d5c8276b163cae1a73a - - default default] Security group member updated ['0c7393ad-63e1-4b57-bb16-ccf2466506e9']#033[00m
Nov 23 05:03:25 localhost podman[317702]: 2025-11-23 10:03:25.066707981 +0000 UTC m=+0.200931250 container remove c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a7fb5b2-b1e2-4c40-a660-0aef70612eb7, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:25 localhost systemd[1]: tmp-crun.51p8jR.mount: Deactivated successfully.
Nov 23 05:03:25 localhost systemd[1]: var-lib-containers-storage-overlay-782fd9f6eff4e0eab05449a04120b9e69ed50e27b03cd4c9b0a1ad12127e8578-merged.mount: Deactivated successfully.
Nov 23 05:03:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1ed86c23e25b0736a357296b9ab4b49084c76798ee2bf2175476e0eb78d2b3a-userdata-shm.mount: Deactivated successfully.
Nov 23 05:03:25 localhost systemd[1]: run-netns-qdhcp\x2d9a7fb5b2\x2db1e2\x2d4c40\x2da660\x2d0aef70612eb7.mount: Deactivated successfully.
Nov 23 05:03:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:25.339 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:24Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa7915f04c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790aa0c40>], id=7916e8c2-b107-4026-9a9a-d41e080efc2f, ip_allocation=immediate, mac_address=fa:16:3e:1b:fd:ff, name=tempest-RoutersAdminNegativeTest-685379603, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=True, project_id=1ed74022d4944d5c8276b163cae1a73a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0c7393ad-63e1-4b57-bb16-ccf2466506e9'], standard_attr_id=2254, status=DOWN, tags=[], tenant_id=1ed74022d4944d5c8276b163cae1a73a, updated_at=2025-11-23T10:03:24Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:03:25 localhost podman[317746]: 2025-11-23 10:03:25.534077515 +0000 UTC m=+0.061703653 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:25 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:03:25 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:03:25 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:03:25 localhost nova_compute[281613]: 2025-11-23 10:03:25.676 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:25.823 262721 INFO neutron.agent.dhcp.agent [None req-96c0651f-126d-4c9b-b9d2-a82af357304f - - - - - -] DHCP configuration for ports {'7916e8c2-b107-4026-9a9a-d41e080efc2f'} is completed#033[00m
Nov 23 05:03:26 localhost nova_compute[281613]: 2025-11-23 10:03:26.262 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:27 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e147 e147: 6 total, 6 up, 6 in
Nov 23 05:03:27 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:27.684 2 INFO neutron.agent.securitygroups_rpc [None req-746b6928-7309-4035-bfd7-9d9f95de5728 da47bb8e9ce044b7a6c60aeaa303445e 1ed74022d4944d5c8276b163cae1a73a - - default default] Security group member updated ['0c7393ad-63e1-4b57-bb16-ccf2466506e9']#033[00m
Nov 23 05:03:28 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:03:28 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:03:28 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:03:28 localhost podman[317784]: 2025-11-23 10:03:28.089719876 +0000 UTC m=+0.073745783 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 05:03:28 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e148 e148: 6 total, 6 up, 6 in
Nov 23 05:03:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:30 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e149 e149: 6 total, 6 up, 6 in
Nov 23 05:03:30 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:30.174 2 INFO neutron.agent.securitygroups_rpc [None req-7bdd1f21-8588-4120-93d3-3c530b607701 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:30 localhost nova_compute[281613]: 2025-11-23 10:03:30.679 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:31 localhost nova_compute[281613]: 2025-11-23 10:03:31.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:31 localhost nova_compute[281613]: 2025-11-23 10:03:31.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 05:03:31 localhost nova_compute[281613]: 2025-11-23 10:03:31.031 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:31 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e150 e150: 6 total, 6 up, 6 in
Nov 23 05:03:31 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:31.140 2 INFO neutron.agent.securitygroups_rpc [None req-04f8e66d-7c6f-40a2-b025-3f4a41cc4961 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:31 localhost nova_compute[281613]: 2025-11-23 10:03:31.301 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:31 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:31.850 2 INFO neutron.agent.securitygroups_rpc [None req-493a4cea-33ac-4b44-8c12-ef9399b27484 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:31 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 05:03:31 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1092660839' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 05:03:32 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:32.392 2 INFO neutron.agent.securitygroups_rpc [None req-88038149-5d7c-44ef-b630-13d9beb8c240 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e151 e151: 6 total, 6 up, 6 in
Nov 23 05:03:33 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:33.419 2 INFO neutron.agent.securitygroups_rpc [None req-6155fed5-7bb5-4f1e-ab16-bb23a0937b77 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e152 e152: 6 total, 6 up, 6 in
Nov 23 05:03:34 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:34.747 2 INFO neutron.agent.securitygroups_rpc [None req-0eb2e2be-3d24-4ace-a008-a7d7e29c4328 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:35 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e153 e153: 6 total, 6 up, 6 in
Nov 23 05:03:35 localhost nova_compute[281613]: 2025-11-23 10:03:35.683 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:36 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:36.186 2 INFO neutron.agent.securitygroups_rpc [None req-a71f4ada-2ee4-4e29-a05b-0c137a49cc85 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:36 localhost nova_compute[281613]: 2025-11-23 10:03:36.348 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:36 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:36.437 2 INFO neutron.agent.securitygroups_rpc [None req-606c84ff-d0b3-4da5-8d08-f4db462a1bb4 a8a12d646f734219a5736bd9a89106d3 cd27ceae55c44d478998092e7554fd8a - - default default] Security group member updated ['57d92d06-0a9a-469b-b69f-4fb9e6e560cf']#033[00m
Nov 23 05:03:36 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:36.973 2 INFO neutron.agent.securitygroups_rpc [None req-f95b4467-0d9c-4e77-aa60-8f1596442e50 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e154 e154: 6 total, 6 up, 6 in
Nov 23 05:03:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:03:37 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/327806816' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:03:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:03:37 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/327806816' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:03:37 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:37.607 2 INFO neutron.agent.securitygroups_rpc [None req-dffb97f7-0929-42aa-ac92-4890704581f2 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e155 e155: 6 total, 6 up, 6 in
Nov 23 05:03:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e156 e156: 6 total, 6 up, 6 in
Nov 23 05:03:39 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:39.212 2 INFO neutron.agent.securitygroups_rpc [None req-9d87b2e2-86da-485b-bbea-cfc6692748e1 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:03:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:03:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:03:39 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:39.627 262721 INFO neutron.agent.linux.ip_lib [None req-4069d045-7309-49bd-8385-afc14b6df8fd - - - - - -] Device tapcf2d53ae-c9 cannot be used as it has no MAC address#033[00m
Nov 23 05:03:39 localhost systemd[1]: tmp-crun.Gj1to4.mount: Deactivated successfully.
Nov 23 05:03:39 localhost podman[317809]: 2025-11-23 10:03:39.638112023 +0000 UTC m=+0.110733005 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 23 05:03:39 localhost nova_compute[281613]: 2025-11-23 10:03:39.708 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:39 localhost podman[317809]: 2025-11-23 10:03:39.71223781 +0000 UTC m=+0.184858762 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 05:03:39 localhost kernel: device tapcf2d53ae-c9 entered promiscuous mode
Nov 23 05:03:39 localhost NetworkManager[5990]: <info>  [1763892219.7194] manager: (tapcf2d53ae-c9): new Generic device (/org/freedesktop/NetworkManager/Devices/36)
Nov 23 05:03:39 localhost podman[317810]: 2025-11-23 10:03:39.721267383 +0000 UTC m=+0.186573378 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:03:39 localhost ovn_controller[153786]: 2025-11-23T10:03:39Z|00167|binding|INFO|Claiming lport cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5 for this chassis.
Nov 23 05:03:39 localhost ovn_controller[153786]: 2025-11-23T10:03:39Z|00168|binding|INFO|cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5: Claiming unknown
Nov 23 05:03:39 localhost systemd-udevd[317877]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:03:39 localhost podman[317810]: 2025-11-23 10:03:39.729184347 +0000 UTC m=+0.194490272 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:03:39 localhost nova_compute[281613]: 2025-11-23 10:03:39.727 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:39 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:39.734 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-039a0f7c-b431-41eb-99e2-b1b80b702d7f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039a0f7c-b431-41eb-99e2-b1b80b702d7f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87bc92c-583a-45b4-a2d2-b28c0226a769, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:39 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:39.735 159429 INFO neutron.agent.ovn.metadata.agent [-] Port cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5 in datapath 039a0f7c-b431-41eb-99e2-b1b80b702d7f bound to our chassis#033[00m
Nov 23 05:03:39 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:39.738 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 54272d8f-f264-46a7-9c0a-0a5b6aefad8d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 05:03:39 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:39.738 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 039a0f7c-b431-41eb-99e2-b1b80b702d7f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:03:39 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:39.740 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[acc81788-e517-4ebf-b22f-dc6ddb954d5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:39 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:03:39 localhost journal[229736]: ethtool ioctl error on tapcf2d53ae-c9: No such device
Nov 23 05:03:39 localhost journal[229736]: ethtool ioctl error on tapcf2d53ae-c9: No such device
Nov 23 05:03:39 localhost ovn_controller[153786]: 2025-11-23T10:03:39Z|00169|binding|INFO|Setting lport cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5 ovn-installed in OVS
Nov 23 05:03:39 localhost ovn_controller[153786]: 2025-11-23T10:03:39Z|00170|binding|INFO|Setting lport cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5 up in Southbound
Nov 23 05:03:39 localhost journal[229736]: ethtool ioctl error on tapcf2d53ae-c9: No such device
Nov 23 05:03:39 localhost nova_compute[281613]: 2025-11-23 10:03:39.754 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:39 localhost journal[229736]: ethtool ioctl error on tapcf2d53ae-c9: No such device
Nov 23 05:03:39 localhost journal[229736]: ethtool ioctl error on tapcf2d53ae-c9: No such device
Nov 23 05:03:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:03:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2238 writes, 23K keys, 2238 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s#012Cumulative WAL: 2238 writes, 2238 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2238 writes, 23K keys, 2238 commit groups, 1.0 writes per commit group, ingest: 38.58 MB, 0.06 MB/s#012Interval WAL: 2238 writes, 2238 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    187.0      0.14              0.07        10    0.014       0      0       0.0       0.0#012  L6      1/0   14.24 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   5.4    231.6    212.9      0.69              0.39         9    0.076    114K   4476       0.0       0.0#012 Sum      1/0   14.24 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   6.4    191.3    208.4      0.83              0.46        19    0.044    114K   4476       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   6.4    191.9    209.0      0.83              0.46        18    0.046    114K   4476       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    231.6    212.9      0.69              0.39         9    0.076    114K   4476       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    190.3      0.14              0.07         9    0.016       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.026, interval 0.026#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.29 MB/s write, 0.16 GB read, 0.26 MB/s read, 0.8 seconds#012Interval compaction: 0.17 GB write, 0.29 MB/s write, 0.16 GB read, 0.26 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5569c8721350#2 capacity: 308.00 MB usage: 11.85 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000146 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(591,11.11 MB,3.60858%) FilterBlock(19,333.55 KB,0.105756%) IndexBlock(19,423.52 KB,0.134282%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 05:03:39 localhost journal[229736]: ethtool ioctl error on tapcf2d53ae-c9: No such device
Nov 23 05:03:39 localhost journal[229736]: ethtool ioctl error on tapcf2d53ae-c9: No such device
Nov 23 05:03:39 localhost journal[229736]: ethtool ioctl error on tapcf2d53ae-c9: No such device
Nov 23 05:03:39 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:03:39 localhost podman[317808]: 2025-11-23 10:03:39.733170265 +0000 UTC m=+0.208797158 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Nov 23 05:03:39 localhost nova_compute[281613]: 2025-11-23 10:03:39.792 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:39 localhost podman[317808]: 2025-11-23 10:03:39.819061299 +0000 UTC m=+0.294688182 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Nov 23 05:03:39 localhost nova_compute[281613]: 2025-11-23 10:03:39.821 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:39 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:03:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:39 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:39.939 2 INFO neutron.agent.securitygroups_rpc [None req-0cc7d2c8-7386-4953-9ec2-42e302a377e6 a8a12d646f734219a5736bd9a89106d3 cd27ceae55c44d478998092e7554fd8a - - default default] Security group member updated ['57d92d06-0a9a-469b-b69f-4fb9e6e560cf']#033[00m
Nov 23 05:03:40 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e157 e157: 6 total, 6 up, 6 in
Nov 23 05:03:40 localhost systemd[1]: tmp-crun.E98H7O.mount: Deactivated successfully.
Nov 23 05:03:40 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:40.581 2 INFO neutron.agent.securitygroups_rpc [None req-bd23e853-4daf-4921-98db-bb97f636a505 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:40 localhost nova_compute[281613]: 2025-11-23 10:03:40.685 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:40 localhost podman[318018]: 
Nov 23 05:03:40 localhost podman[318018]: 2025-11-23 10:03:40.88999683 +0000 UTC m=+0.093659926 container create 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:40 localhost systemd[1]: Started libpod-conmon-29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9.scope.
Nov 23 05:03:40 localhost podman[318018]: 2025-11-23 10:03:40.844056401 +0000 UTC m=+0.047719528 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:03:40 localhost systemd[1]: Started libcrun container.
Nov 23 05:03:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9328994ac1175af0b6eba1cb00a33ce272bd0913f66bf76b4edf46876d50002/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:03:40 localhost podman[318018]: 2025-11-23 10:03:40.992187114 +0000 UTC m=+0.195850210 container init 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 05:03:41 localhost podman[318018]: 2025-11-23 10:03:41.000691323 +0000 UTC m=+0.204354449 container start 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:03:41 localhost dnsmasq[318055]: started, version 2.85 cachesize 150
Nov 23 05:03:41 localhost dnsmasq[318055]: DNS service limited to local subnets
Nov 23 05:03:41 localhost dnsmasq[318055]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:03:41 localhost dnsmasq[318055]: warning: no upstream servers configured
Nov 23 05:03:41 localhost dnsmasq-dhcp[318055]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:03:41 localhost dnsmasq[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/addn_hosts - 0 addresses
Nov 23 05:03:41 localhost dnsmasq-dhcp[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/host
Nov 23 05:03:41 localhost dnsmasq-dhcp[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/opts
Nov 23 05:03:41 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:03:41 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.190 262721 INFO neutron.agent.dhcp.agent [None req-02506e63-ae37-40f9-940b-ae491e219db4 - - - - - -] DHCP configuration for ports {'77229e05-a31e-41cf-a451-9e18b7a6fa29'} is completed#033[00m
Nov 23 05:03:41 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:41.207 2 INFO neutron.agent.securitygroups_rpc [None req-8d10cea8-bd4a-4441-b879-9913f6e3c03c 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:03:41 localhost ovn_controller[153786]: 2025-11-23T10:03:41Z|00171|binding|INFO|Removing iface tapcf2d53ae-c9 ovn-installed in OVS
Nov 23 05:03:41 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:41.234 159429 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 54272d8f-f264-46a7-9c0a-0a5b6aefad8d with type ""#033[00m
Nov 23 05:03:41 localhost ovn_controller[153786]: 2025-11-23T10:03:41Z|00172|binding|INFO|Removing lport cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5 ovn-installed in OVS
Nov 23 05:03:41 localhost nova_compute[281613]: 2025-11-23 10:03:41.236 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:41 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:41.236 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-039a0f7c-b431-41eb-99e2-b1b80b702d7f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039a0f7c-b431-41eb-99e2-b1b80b702d7f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b87bc92c-583a-45b4-a2d2-b28c0226a769, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:41 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:41.240 159429 INFO neutron.agent.ovn.metadata.agent [-] Port cf2d53ae-c9e8-42f3-9fb2-d16c11dc85a5 in datapath 039a0f7c-b431-41eb-99e2-b1b80b702d7f unbound from our chassis#033[00m
Nov 23 05:03:41 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:41.243 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 039a0f7c-b431-41eb-99e2-b1b80b702d7f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:03:41 localhost nova_compute[281613]: 2025-11-23 10:03:41.243 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:41 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:41.244 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[820a6777-b093-46fc-801f-e721ec020095]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:41 localhost podman[240144]: time="2025-11-23T10:03:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:03:41 localhost podman[240144]: @ - - [23/Nov/2025:10:03:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 161802 "" "Go-http-client/1.1"
Nov 23 05:03:41 localhost nova_compute[281613]: 2025-11-23 10:03:41.349 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:41 localhost podman[240144]: @ - - [23/Nov/2025:10:03:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20626 "" "Go-http-client/1.1"
Nov 23 05:03:41 localhost dnsmasq[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/addn_hosts - 0 addresses
Nov 23 05:03:41 localhost dnsmasq-dhcp[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/host
Nov 23 05:03:41 localhost dnsmasq-dhcp[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/opts
Nov 23 05:03:41 localhost podman[318073]: 2025-11-23 10:03:41.447652698 +0000 UTC m=+0.069217116 container kill 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:41 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e158 e158: 6 total, 6 up, 6 in
Nov 23 05:03:41 localhost nova_compute[281613]: 2025-11-23 10:03:41.657 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:41 localhost kernel: device tapcf2d53ae-c9 left promiscuous mode
Nov 23 05:03:41 localhost nova_compute[281613]: 2025-11-23 10:03:41.673 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:41 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:41.703 2 INFO neutron.agent.securitygroups_rpc [None req-99b4e496-02de-4948-b003-eb8832f49bd1 92c363a19c824119a63f49b23b9d66e1 6fc3fa728d6f4403acd9944d81eaeb18 - - default default] Security group member updated ['33cf6d6a-dd4c-4287-83e4-f8bbab9f55b2']#033[00m
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.721 262721 INFO neutron.agent.dhcp.agent [None req-a74fa26c-e03c-43ae-86c2-0b1c08100b68 - - - - - -] DHCP configuration for ports {'77229e05-a31e-41cf-a451-9e18b7a6fa29'} is completed#033[00m
Nov 23 05:03:41 localhost dnsmasq[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/addn_hosts - 0 addresses
Nov 23 05:03:41 localhost dnsmasq-dhcp[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/host
Nov 23 05:03:41 localhost dnsmasq-dhcp[318055]: read /var/lib/neutron/dhcp/039a0f7c-b431-41eb-99e2-b1b80b702d7f/opts
Nov 23 05:03:41 localhost podman[318114]: 2025-11-23 10:03:41.933113291 +0000 UTC m=+0.066184185 container kill 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent [None req-1c3f966c-9d91-4bf8-b858-ffa5fd013aab - - - - - -] Unable to reload_allocations dhcp for 039a0f7c-b431-41eb-99e2-b1b80b702d7f.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapcf2d53ae-c9 not found in namespace qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f.
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     return fut.result()
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     raise self._exception
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapcf2d53ae-c9 not found in namespace qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f.
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.967 262721 ERROR neutron.agent.dhcp.agent #033[00m
Nov 23 05:03:41 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:41.971 262721 INFO neutron.agent.dhcp.agent [None req-d85b9d4f-26f1-4699-a3ff-df977de67cac - - - - - -] Synchronizing state#033[00m
Nov 23 05:03:42 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:42.175 2 INFO neutron.agent.securitygroups_rpc [None req-dbf580d8-52e8-4ecb-9364-932e14668854 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Security group rule updated ['d77fc436-3ab1-42e0-a52b-861d18fcc237']#033[00m
Nov 23 05:03:42 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:42.180 262721 INFO neutron.agent.dhcp.agent [None req-9310ef4e-562d-43da-b523-0c0ef17ce6d4 - - - - - -] All active networks have been fetched through RPC.#033[00m
Nov 23 05:03:42 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:42.181 262721 INFO neutron.agent.dhcp.agent [-] Starting network 039a0f7c-b431-41eb-99e2-b1b80b702d7f dhcp configuration#033[00m
Nov 23 05:03:42 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:42.182 262721 INFO neutron.agent.dhcp.agent [-] Finished network 039a0f7c-b431-41eb-99e2-b1b80b702d7f dhcp configuration#033[00m
Nov 23 05:03:42 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:42.182 262721 INFO neutron.agent.dhcp.agent [None req-9310ef4e-562d-43da-b523-0c0ef17ce6d4 - - - - - -] Synchronizing state complete#033[00m
Nov 23 05:03:42 localhost nova_compute[281613]: 2025-11-23 10:03:42.355 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:42 localhost dnsmasq[318055]: exiting on receipt of SIGTERM
Nov 23 05:03:42 localhost podman[318145]: 2025-11-23 10:03:42.46262285 +0000 UTC m=+0.045168968 container kill 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:42 localhost systemd[1]: libpod-29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9.scope: Deactivated successfully.
Nov 23 05:03:42 localhost podman[318158]: 2025-11-23 10:03:42.532639857 +0000 UTC m=+0.056530444 container died 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:42 localhost systemd[1]: tmp-crun.6BOToC.mount: Deactivated successfully.
Nov 23 05:03:42 localhost systemd[1]: var-lib-containers-storage-overlay-e9328994ac1175af0b6eba1cb00a33ce272bd0913f66bf76b4edf46876d50002-merged.mount: Deactivated successfully.
Nov 23 05:03:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9-userdata-shm.mount: Deactivated successfully.
Nov 23 05:03:42 localhost podman[318158]: 2025-11-23 10:03:42.575836901 +0000 UTC m=+0.099727448 container cleanup 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:03:42 localhost systemd[1]: libpod-conmon-29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9.scope: Deactivated successfully.
Nov 23 05:03:42 localhost podman[318160]: 2025-11-23 10:03:42.623277139 +0000 UTC m=+0.136247372 container remove 29a025437d2aa67bf83e4875744fb0dc35474fad154a0179adb78c2a2af863c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-039a0f7c-b431-41eb-99e2-b1b80b702d7f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:03:42 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:42.738 2 INFO neutron.agent.securitygroups_rpc [None req-d8c6f897-4254-46a6-9c9c-683b3a672b23 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Security group rule updated ['d77fc436-3ab1-42e0-a52b-861d18fcc237']#033[00m
Nov 23 05:03:42 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e159 e159: 6 total, 6 up, 6 in
Nov 23 05:03:43 localhost systemd[1]: run-netns-qdhcp\x2d039a0f7c\x2db431\x2d41eb\x2d99e2\x2db1b80b702d7f.mount: Deactivated successfully.
Nov 23 05:03:43 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:03:44 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:44.279 2 INFO neutron.agent.securitygroups_rpc [None req-71dcb7fc-4cba-40db-b8c8-6bad4f6af9d0 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:03:44 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:44.501 2 INFO neutron.agent.securitygroups_rpc [None req-71dcb7fc-4cba-40db-b8c8-6bad4f6af9d0 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:03:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:45 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:45.119 2 INFO neutron.agent.securitygroups_rpc [None req-cef19edc-dbb5-4bcd-8945-0c2ead165d91 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:03:45 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:45.325 262721 INFO neutron.agent.linux.ip_lib [None req-cc019f56-2b17-4ecf-b006-aaac32beec79 - - - - - -] Device tapfb67bd2f-57 cannot be used as it has no MAC address#033[00m
Nov 23 05:03:45 localhost nova_compute[281613]: 2025-11-23 10:03:45.350 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:45 localhost kernel: device tapfb67bd2f-57 entered promiscuous mode
Nov 23 05:03:45 localhost NetworkManager[5990]: <info>  [1763892225.3601] manager: (tapfb67bd2f-57): new Generic device (/org/freedesktop/NetworkManager/Devices/37)
Nov 23 05:03:45 localhost nova_compute[281613]: 2025-11-23 10:03:45.361 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:45 localhost ovn_controller[153786]: 2025-11-23T10:03:45Z|00173|binding|INFO|Claiming lport fb67bd2f-5708-4486-a2a8-f2c37eb94275 for this chassis.
Nov 23 05:03:45 localhost ovn_controller[153786]: 2025-11-23T10:03:45Z|00174|binding|INFO|fb67bd2f-5708-4486-a2a8-f2c37eb94275: Claiming unknown
Nov 23 05:03:45 localhost systemd-udevd[318196]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:03:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:45.378 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-660f6865-bd58-4109-8f41-861e77e9eadf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-660f6865-bd58-4109-8f41-861e77e9eadf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27ceae55c44d478998092e7554fd8a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18d758bf-4799-4c57-b042-859139ff90c2, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=fb67bd2f-5708-4486-a2a8-f2c37eb94275) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:45.381 159429 INFO neutron.agent.ovn.metadata.agent [-] Port fb67bd2f-5708-4486-a2a8-f2c37eb94275 in datapath 660f6865-bd58-4109-8f41-861e77e9eadf bound to our chassis#033[00m
Nov 23 05:03:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:45.383 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 660f6865-bd58-4109-8f41-861e77e9eadf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:03:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:45.384 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[917e904a-8b4f-4c8b-8251-499820c33f09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:45 localhost journal[229736]: ethtool ioctl error on tapfb67bd2f-57: No such device
Nov 23 05:03:45 localhost ovn_controller[153786]: 2025-11-23T10:03:45Z|00175|binding|INFO|Setting lport fb67bd2f-5708-4486-a2a8-f2c37eb94275 ovn-installed in OVS
Nov 23 05:03:45 localhost ovn_controller[153786]: 2025-11-23T10:03:45Z|00176|binding|INFO|Setting lport fb67bd2f-5708-4486-a2a8-f2c37eb94275 up in Southbound
Nov 23 05:03:45 localhost nova_compute[281613]: 2025-11-23 10:03:45.399 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:45 localhost journal[229736]: ethtool ioctl error on tapfb67bd2f-57: No such device
Nov 23 05:03:45 localhost journal[229736]: ethtool ioctl error on tapfb67bd2f-57: No such device
Nov 23 05:03:45 localhost journal[229736]: ethtool ioctl error on tapfb67bd2f-57: No such device
Nov 23 05:03:45 localhost journal[229736]: ethtool ioctl error on tapfb67bd2f-57: No such device
Nov 23 05:03:45 localhost journal[229736]: ethtool ioctl error on tapfb67bd2f-57: No such device
Nov 23 05:03:45 localhost journal[229736]: ethtool ioctl error on tapfb67bd2f-57: No such device
Nov 23 05:03:45 localhost journal[229736]: ethtool ioctl error on tapfb67bd2f-57: No such device
Nov 23 05:03:45 localhost nova_compute[281613]: 2025-11-23 10:03:45.440 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:45 localhost nova_compute[281613]: 2025-11-23 10:03:45.476 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:45 localhost nova_compute[281613]: 2025-11-23 10:03:45.623 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:45.625 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:45.626 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:03:45 localhost nova_compute[281613]: 2025-11-23 10:03:45.689 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:46 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:46.270 2 INFO neutron.agent.securitygroups_rpc [None req-b1ac9adf-2928-4e8c-b93d-9a7afa468620 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:03:46 localhost podman[318267]: 
Nov 23 05:03:46 localhost podman[318267]: 2025-11-23 10:03:46.33580707 +0000 UTC m=+0.103094640 container create fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:46 localhost nova_compute[281613]: 2025-11-23 10:03:46.378 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:46 localhost podman[318267]: 2025-11-23 10:03:46.282546894 +0000 UTC m=+0.049834905 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:03:46 localhost systemd[1]: Started libpod-conmon-fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d.scope.
Nov 23 05:03:46 localhost systemd[1]: tmp-crun.pFxCvW.mount: Deactivated successfully.
Nov 23 05:03:46 localhost systemd[1]: Started libcrun container.
Nov 23 05:03:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ee11b22e1da3b42cd6facbc349bdade46ecda87108f1c86c4b378fc9476071/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:03:46 localhost podman[318267]: 2025-11-23 10:03:46.426320119 +0000 UTC m=+0.193607679 container init fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 05:03:46 localhost podman[318267]: 2025-11-23 10:03:46.437300115 +0000 UTC m=+0.204587675 container start fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 05:03:46 localhost dnsmasq[318285]: started, version 2.85 cachesize 150
Nov 23 05:03:46 localhost dnsmasq[318285]: DNS service limited to local subnets
Nov 23 05:03:46 localhost dnsmasq[318285]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:03:46 localhost dnsmasq[318285]: warning: no upstream servers configured
Nov 23 05:03:46 localhost dnsmasq-dhcp[318285]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Nov 23 05:03:46 localhost dnsmasq[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/addn_hosts - 0 addresses
Nov 23 05:03:46 localhost dnsmasq-dhcp[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/host
Nov 23 05:03:46 localhost dnsmasq-dhcp[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/opts
Nov 23 05:03:46 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:46.507 262721 INFO neutron.agent.dhcp.agent [None req-cc019f56-2b17-4ecf-b006-aaac32beec79 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:45Z, description=, device_id=7c3a9632-bfdc-43d0-b10b-17ae94ecc6b5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cca7c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cca9d0>], id=c251768f-24cc-4deb-be5e-796907d80152, ip_allocation=immediate, mac_address=fa:16:3e:82:9f:47, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:43Z, description=, dns_domain=, id=660f6865-bd58-4109-8f41-861e77e9eadf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-365949370, port_security_enabled=True, project_id=cd27ceae55c44d478998092e7554fd8a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4558, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2338, status=ACTIVE, subnets=['a942c8ef-152b-4c98-8b46-63b98eb18009'], tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:03:44Z, vlan_transparent=None, network_id=660f6865-bd58-4109-8f41-861e77e9eadf, port_security_enabled=False, project_id=cd27ceae55c44d478998092e7554fd8a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2355, status=DOWN, tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:03:45Z on network 660f6865-bd58-4109-8f41-861e77e9eadf#033[00m
Nov 23 05:03:46 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:46.637 262721 INFO neutron.agent.dhcp.agent [None req-1044abff-5fd5-4186-98e8-b6e454a47ca5 - - - - - -] DHCP configuration for ports {'ae8f8b07-4cc4-48c8-ad1e-25cc90b2bbdb'} is completed#033[00m
Nov 23 05:03:46 localhost dnsmasq[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/addn_hosts - 1 addresses
Nov 23 05:03:46 localhost dnsmasq-dhcp[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/host
Nov 23 05:03:46 localhost dnsmasq-dhcp[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/opts
Nov 23 05:03:46 localhost podman[318304]: 2025-11-23 10:03:46.723287092 +0000 UTC m=+0.062298050 container kill fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 05:03:47 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:47.024 262721 INFO neutron.agent.dhcp.agent [None req-af05d8a6-dae7-4b77-be17-c512c8a07bef - - - - - -] DHCP configuration for ports {'c251768f-24cc-4deb-be5e-796907d80152'} is completed#033[00m
Nov 23 05:03:47 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:47.627 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:03:47 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:47.722 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:45Z, description=, device_id=7c3a9632-bfdc-43d0-b10b-17ae94ecc6b5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b45070>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790d41790>], id=c251768f-24cc-4deb-be5e-796907d80152, ip_allocation=immediate, mac_address=fa:16:3e:82:9f:47, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:43Z, description=, dns_domain=, id=660f6865-bd58-4109-8f41-861e77e9eadf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-365949370, port_security_enabled=True, project_id=cd27ceae55c44d478998092e7554fd8a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=4558, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2338, status=ACTIVE, subnets=['a942c8ef-152b-4c98-8b46-63b98eb18009'], tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:03:44Z, vlan_transparent=None, network_id=660f6865-bd58-4109-8f41-861e77e9eadf, port_security_enabled=False, project_id=cd27ceae55c44d478998092e7554fd8a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2355, status=DOWN, tags=[], tenant_id=cd27ceae55c44d478998092e7554fd8a, updated_at=2025-11-23T10:03:45Z on network 660f6865-bd58-4109-8f41-861e77e9eadf#033[00m
Nov 23 05:03:47 localhost nova_compute[281613]: 2025-11-23 10:03:47.788 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:47 localhost nova_compute[281613]: 2025-11-23 10:03:47.789 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:47 localhost nova_compute[281613]: 2025-11-23 10:03:47.815 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 23 05:03:47 localhost nova_compute[281613]: 2025-11-23 10:03:47.936 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:47 localhost nova_compute[281613]: 2025-11-23 10:03:47.937 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:47 localhost nova_compute[281613]: 2025-11-23 10:03:47.943 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 23 05:03:47 localhost nova_compute[281613]: 2025-11-23 10:03:47.943 281617 INFO nova.compute.claims [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Claim successful on node np0005532586.localdomain#033[00m
Nov 23 05:03:47 localhost dnsmasq[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/addn_hosts - 1 addresses
Nov 23 05:03:47 localhost dnsmasq-dhcp[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/host
Nov 23 05:03:47 localhost podman[318343]: 2025-11-23 10:03:47.950428922 +0000 UTC m=+0.072676250 container kill fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:03:47 localhost dnsmasq-dhcp[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/opts
Nov 23 05:03:47 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e160 e160: 6 total, 6 up, 6 in
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.086 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:48 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:48.291 262721 INFO neutron.agent.dhcp.agent [None req-f96950a3-694d-4204-bc33-959bea9087ec - - - - - -] DHCP configuration for ports {'c251768f-24cc-4deb-be5e-796907d80152'} is completed#033[00m
Nov 23 05:03:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:03:48 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/516761612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.575 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.583 281617 DEBUG nova.compute.provider_tree [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.598 281617 DEBUG nova.scheduler.client.report [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.687 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.688 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.938 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.939 281617 DEBUG nova.network.neutron [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.954 281617 INFO nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 23 05:03:48 localhost nova_compute[281613]: 2025-11-23 10:03:48.971 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.078 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.080 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.081 281617 INFO nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Creating image(s)#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.118 281617 DEBUG nova.storage.rbd_utils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] rbd image 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.158 281617 DEBUG nova.storage.rbd_utils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] rbd image 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.198 281617 DEBUG nova.storage.rbd_utils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] rbd image 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.207 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.282 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.284 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.285 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.286 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "ba971d9ef74673015953b46ad4dbea47e54dd66a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.322 281617 DEBUG nova.storage.rbd_utils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] rbd image 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.328 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.359 281617 DEBUG nova.policy [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5f7e9736cbc74ce4ac3de51c4ac84504', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '49ebd7a691dd4ea59ffbe9f5703e77e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 23 05:03:49 localhost dnsmasq[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/addn_hosts - 0 addresses
Nov 23 05:03:49 localhost podman[318494]: 2025-11-23 10:03:49.59236947 +0000 UTC m=+0.071226300 container kill fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:49 localhost dnsmasq-dhcp[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/host
Nov 23 05:03:49 localhost dnsmasq-dhcp[318285]: read /var/lib/neutron/dhcp/660f6865-bd58-4109-8f41-861e77e9eadf/opts
Nov 23 05:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:03:49 localhost systemd[1]: tmp-crun.31VEVF.mount: Deactivated successfully.
Nov 23 05:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:03:49 localhost podman[318509]: 2025-11-23 10:03:49.737551282 +0000 UTC m=+0.115831992 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:03:49 localhost podman[318511]: 2025-11-23 10:03:49.77306543 +0000 UTC m=+0.147693352 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:49 localhost podman[318509]: 2025-11-23 10:03:49.816918411 +0000 UTC m=+0.195199101 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.833 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:49 localhost ovn_controller[153786]: 2025-11-23T10:03:49Z|00177|binding|INFO|Releasing lport fb67bd2f-5708-4486-a2a8-f2c37eb94275 from this chassis (sb_readonly=0)
Nov 23 05:03:49 localhost kernel: device tapfb67bd2f-57 left promiscuous mode
Nov 23 05:03:49 localhost ovn_controller[153786]: 2025-11-23T10:03:49Z|00178|binding|INFO|Setting lport fb67bd2f-5708-4486-a2a8-f2c37eb94275 down in Southbound
Nov 23 05:03:49 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:03:49 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:49.850 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-660f6865-bd58-4109-8f41-861e77e9eadf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-660f6865-bd58-4109-8f41-861e77e9eadf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd27ceae55c44d478998092e7554fd8a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18d758bf-4799-4c57-b042-859139ff90c2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=fb67bd2f-5708-4486-a2a8-f2c37eb94275) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:49 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:49.852 159429 INFO neutron.agent.ovn.metadata.agent [-] Port fb67bd2f-5708-4486-a2a8-f2c37eb94275 in datapath 660f6865-bd58-4109-8f41-861e77e9eadf unbound from our chassis#033[00m
Nov 23 05:03:49 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:49.853 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 660f6865-bd58-4109-8f41-861e77e9eadf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:03:49 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:49.854 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[c94c6b70-de44-4eba-9dce-05f5ba2a4fd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.858 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:49 localhost podman[318511]: 2025-11-23 10:03:49.865081979 +0000 UTC m=+0.239709931 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 05:03:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:49 localhost podman[318510]: 2025-11-23 10:03:49.872290373 +0000 UTC m=+0.250648645 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 05:03:49 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:03:49 localhost podman[318510]: 2025-11-23 10:03:49.882627822 +0000 UTC m=+0.260986134 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:03:49 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:03:49 localhost nova_compute[281613]: 2025-11-23 10:03:49.967 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ba971d9ef74673015953b46ad4dbea47e54dd66a 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:49 localhost podman[318553]: 2025-11-23 10:03:49.978210208 +0000 UTC m=+0.235131067 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:03:49 localhost podman[318553]: 2025-11-23 10:03:49.9868135 +0000 UTC m=+0.243734349 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:03:50 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.074 281617 DEBUG nova.storage.rbd_utils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] resizing rbd image 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 23 05:03:50 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:50.218 2 INFO neutron.agent.securitygroups_rpc [req-dc022d26-398e-4427-8b9e-d6e32e3174fc req-12998a72-36e8-4adc-96fc-04c6618198f0 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Security group member updated ['d77fc436-3ab1-42e0-a52b-861d18fcc237']#033[00m
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.241 281617 DEBUG nova.objects.instance [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 0878698a-ffc9-486f-96bf-d5a905dca1b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.259 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.260 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Ensure instance console log exists: /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.260 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.261 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.261 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:50.295 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:49Z, description=, device_id=0878698a-ffc9-486f-96bf-d5a905dca1b1, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a77b20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b7aa30>], id=cde9c0d4-e623-4543-a691-b11d78d0521b, ip_allocation=immediate, mac_address=fa:16:3e:5f:05:71, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:02Z, description=, dns_domain=, id=27537d61-8ae5-47a8-b217-f913cbb83ef7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1492741888-network, port_security_enabled=True, project_id=49ebd7a691dd4ea59ffbe9f5703e77e4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62870, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2143, status=ACTIVE, subnets=['03b71d3e-a613-43b4-8aab-cffb0ff2b328'], tags=[], tenant_id=49ebd7a691dd4ea59ffbe9f5703e77e4, updated_at=2025-11-23T10:03:04Z, vlan_transparent=None, network_id=27537d61-8ae5-47a8-b217-f913cbb83ef7, port_security_enabled=True, project_id=49ebd7a691dd4ea59ffbe9f5703e77e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d77fc436-3ab1-42e0-a52b-861d18fcc237'], standard_attr_id=2383, status=DOWN, tags=[], tenant_id=49ebd7a691dd4ea59ffbe9f5703e77e4, updated_at=2025-11-23T10:03:50Z on network 27537d61-8ae5-47a8-b217-f913cbb83ef7#033[00m
Nov 23 05:03:50 localhost dnsmasq[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/addn_hosts - 2 addresses
Nov 23 05:03:50 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/host
Nov 23 05:03:50 localhost podman[318690]: 2025-11-23 10:03:50.509136766 +0000 UTC m=+0.060727067 container kill a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:50 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/opts
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.574 281617 DEBUG nova.network.neutron [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Successfully created port: cde9c0d4-e623-4543-a691-b11d78d0521b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 23 05:03:50 localhost nova_compute[281613]: 2025-11-23 10:03:50.691 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:50 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:50.768 262721 INFO neutron.agent.dhcp.agent [None req-8a948a0f-7124-4019-8990-ecc2133e36c9 - - - - - -] DHCP configuration for ports {'cde9c0d4-e623-4543-a691-b11d78d0521b'} is completed#033[00m
Nov 23 05:03:51 localhost dnsmasq[318285]: exiting on receipt of SIGTERM
Nov 23 05:03:51 localhost podman[318728]: 2025-11-23 10:03:51.189422309 +0000 UTC m=+0.059147895 container kill fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:03:51 localhost systemd[1]: libpod-fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d.scope: Deactivated successfully.
Nov 23 05:03:51 localhost podman[318742]: 2025-11-23 10:03:51.246039754 +0000 UTC m=+0.046191445 container died fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:51 localhost podman[318742]: 2025-11-23 10:03:51.33342703 +0000 UTC m=+0.133578691 container cleanup fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:51 localhost systemd[1]: libpod-conmon-fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d.scope: Deactivated successfully.
Nov 23 05:03:51 localhost podman[318749]: 2025-11-23 10:03:51.357438057 +0000 UTC m=+0.145320568 container remove fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-660f6865-bd58-4109-8f41-861e77e9eadf, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:03:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:51.381 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005532586.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:49Z, description=, device_id=0878698a-ffc9-486f-96bf-d5a905dca1b1, device_owner=compute:nova, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b1f2e0>], dns_domain=, dns_name=tempest-volumesbackupstest-instance-1770644392, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b1fd00>], id=cde9c0d4-e623-4543-a691-b11d78d0521b, ip_allocation=immediate, mac_address=fa:16:3e:5f:05:71, name=, network_id=27537d61-8ae5-47a8-b217-f913cbb83ef7, port_security_enabled=True, project_id=49ebd7a691dd4ea59ffbe9f5703e77e4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['d77fc436-3ab1-42e0-a52b-861d18fcc237'], standard_attr_id=2383, status=DOWN, tags=[], tenant_id=49ebd7a691dd4ea59ffbe9f5703e77e4, updated_at=2025-11-23T10:03:51Z on network 27537d61-8ae5-47a8-b217-f913cbb83ef7#033[00m
Nov 23 05:03:51 localhost nova_compute[281613]: 2025-11-23 10:03:51.428 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:51.429 262721 INFO neutron.agent.dhcp.agent [None req-44be592f-5fcb-43ef-85c4-fecce75b3545 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:03:51 localhost systemd[1]: var-lib-containers-storage-overlay-80ee11b22e1da3b42cd6facbc349bdade46ecda87108f1c86c4b378fc9476071-merged.mount: Deactivated successfully.
Nov 23 05:03:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa0d866deadeca53bd2cb7d759e98d3a932731b03948a7f1f3f72c346db3067d-userdata-shm.mount: Deactivated successfully.
Nov 23 05:03:51 localhost systemd[1]: run-netns-qdhcp\x2d660f6865\x2dbd58\x2d4109\x2d8f41\x2d861e77e9eadf.mount: Deactivated successfully.
Nov 23 05:03:51 localhost podman[318785]: 2025-11-23 10:03:51.62957732 +0000 UTC m=+0.079143043 container kill a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:51 localhost dnsmasq[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/addn_hosts - 2 addresses
Nov 23 05:03:51 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/host
Nov 23 05:03:51 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/opts
Nov 23 05:03:51 localhost systemd[1]: tmp-crun.2p1TQC.mount: Deactivated successfully.
Nov 23 05:03:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:51.727 262721 INFO neutron.agent.linux.ip_lib [None req-71d87a53-0c51-4b1d-ad95-f534dd09e957 - - - - - -] Device tapb3fa72fb-ce cannot be used as it has no MAC address#033[00m
Nov 23 05:03:51 localhost nova_compute[281613]: 2025-11-23 10:03:51.764 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:51 localhost kernel: device tapb3fa72fb-ce entered promiscuous mode
Nov 23 05:03:51 localhost NetworkManager[5990]: <info>  [1763892231.7727] manager: (tapb3fa72fb-ce): new Generic device (/org/freedesktop/NetworkManager/Devices/38)
Nov 23 05:03:51 localhost nova_compute[281613]: 2025-11-23 10:03:51.773 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:51 localhost ovn_controller[153786]: 2025-11-23T10:03:51Z|00179|binding|INFO|Claiming lport b3fa72fb-ce13-4f0e-bb51-741d25c150af for this chassis.
Nov 23 05:03:51 localhost ovn_controller[153786]: 2025-11-23T10:03:51Z|00180|binding|INFO|b3fa72fb-ce13-4f0e-bb51-741d25c150af: Claiming unknown
Nov 23 05:03:51 localhost nova_compute[281613]: 2025-11-23 10:03:51.782 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:51 localhost systemd-udevd[318815]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:03:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:51.791 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-74771176-a098-4cd6-a32a-0a135a778efe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74771176-a098-4cd6-a32a-0a135a778efe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abdbcaab-c4a6-41bc-809b-8994b37cb80e, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=b3fa72fb-ce13-4f0e-bb51-741d25c150af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:51.793 159429 INFO neutron.agent.ovn.metadata.agent [-] Port b3fa72fb-ce13-4f0e-bb51-741d25c150af in datapath 74771176-a098-4cd6-a32a-0a135a778efe bound to our chassis#033[00m
Nov 23 05:03:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:51.794 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 74771176-a098-4cd6-a32a-0a135a778efe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:03:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:51.795 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb3243a-ee0d-4651-a671-628f5c4fcf46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:51 localhost journal[229736]: ethtool ioctl error on tapb3fa72fb-ce: No such device
Nov 23 05:03:51 localhost ovn_controller[153786]: 2025-11-23T10:03:51Z|00181|binding|INFO|Setting lport b3fa72fb-ce13-4f0e-bb51-741d25c150af ovn-installed in OVS
Nov 23 05:03:51 localhost ovn_controller[153786]: 2025-11-23T10:03:51Z|00182|binding|INFO|Setting lport b3fa72fb-ce13-4f0e-bb51-741d25c150af up in Southbound
Nov 23 05:03:51 localhost journal[229736]: ethtool ioctl error on tapb3fa72fb-ce: No such device
Nov 23 05:03:51 localhost nova_compute[281613]: 2025-11-23 10:03:51.814 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:51 localhost journal[229736]: ethtool ioctl error on tapb3fa72fb-ce: No such device
Nov 23 05:03:51 localhost journal[229736]: ethtool ioctl error on tapb3fa72fb-ce: No such device
Nov 23 05:03:51 localhost journal[229736]: ethtool ioctl error on tapb3fa72fb-ce: No such device
Nov 23 05:03:51 localhost journal[229736]: ethtool ioctl error on tapb3fa72fb-ce: No such device
Nov 23 05:03:51 localhost journal[229736]: ethtool ioctl error on tapb3fa72fb-ce: No such device
Nov 23 05:03:51 localhost journal[229736]: ethtool ioctl error on tapb3fa72fb-ce: No such device
Nov 23 05:03:51 localhost nova_compute[281613]: 2025-11-23 10:03:51.870 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:51.891 262721 INFO neutron.agent.dhcp.agent [None req-9aa53ebd-e1cf-4d72-90ab-485ab7977666 - - - - - -] DHCP configuration for ports {'cde9c0d4-e623-4543-a691-b11d78d0521b'} is completed#033[00m
Nov 23 05:03:51 localhost nova_compute[281613]: 2025-11-23 10:03:51.907 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:51 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:51.957 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:03:52 localhost openstack_network_exporter[242118]: ERROR   10:03:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:03:52 localhost openstack_network_exporter[242118]: ERROR   10:03:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:03:52 localhost openstack_network_exporter[242118]: ERROR   10:03:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:03:52 localhost openstack_network_exporter[242118]: ERROR   10:03:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:03:52 localhost openstack_network_exporter[242118]: ERROR   10:03:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.454 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.583 281617 DEBUG nova.network.neutron [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Successfully updated port: cde9c0d4-e623-4543-a691-b11d78d0521b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.607 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.608 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquired lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.608 281617 DEBUG nova.network.neutron [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.767 281617 DEBUG nova.network.neutron [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.835 281617 DEBUG nova.compute.manager [req-42cc57ef-f1ca-4be0-ad5e-3497ee5a8fec req-a117ea5f-84c2-4ec4-9c8a-1b7f189adf7c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received event network-changed-cde9c0d4-e623-4543-a691-b11d78d0521b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.835 281617 DEBUG nova.compute.manager [req-42cc57ef-f1ca-4be0-ad5e-3497ee5a8fec req-a117ea5f-84c2-4ec4-9c8a-1b7f189adf7c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Refreshing instance network info cache due to event network-changed-cde9c0d4-e623-4543-a691-b11d78d0521b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 05:03:52 localhost nova_compute[281613]: 2025-11-23 10:03:52.837 281617 DEBUG oslo_concurrency.lockutils [req-42cc57ef-f1ca-4be0-ad5e-3497ee5a8fec req-a117ea5f-84c2-4ec4-9c8a-1b7f189adf7c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 05:03:52 localhost podman[318888]: 
Nov 23 05:03:52 localhost podman[318888]: 2025-11-23 10:03:52.911583289 +0000 UTC m=+0.100276783 container create c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:03:52 localhost systemd[1]: Started libpod-conmon-c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347.scope.
Nov 23 05:03:52 localhost podman[318888]: 2025-11-23 10:03:52.864749587 +0000 UTC m=+0.053443111 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:03:52 localhost systemd[1]: tmp-crun.BlCPFm.mount: Deactivated successfully.
Nov 23 05:03:52 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:52.983 2 INFO neutron.agent.securitygroups_rpc [None req-e884606d-3955-464b-8443-536f305941fb 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:03:53 localhost systemd[1]: Started libcrun container.
Nov 23 05:03:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e70e3d0aa8e5b305409862e3835f1eb78be7b6e058763c80130a75bfc79791/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:03:53 localhost podman[318888]: 2025-11-23 10:03:53.017453453 +0000 UTC m=+0.206146947 container init c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:53 localhost podman[318888]: 2025-11-23 10:03:53.026187778 +0000 UTC m=+0.214881272 container start c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:53 localhost dnsmasq[318907]: started, version 2.85 cachesize 150
Nov 23 05:03:53 localhost dnsmasq[318907]: DNS service limited to local subnets
Nov 23 05:03:53 localhost dnsmasq[318907]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:03:53 localhost dnsmasq[318907]: warning: no upstream servers configured
Nov 23 05:03:53 localhost dnsmasq-dhcp[318907]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 23 05:03:53 localhost dnsmasq[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/addn_hosts - 0 addresses
Nov 23 05:03:53 localhost dnsmasq-dhcp[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/host
Nov 23 05:03:53 localhost dnsmasq-dhcp[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/opts
Nov 23 05:03:53 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:53.101 262721 INFO neutron.agent.dhcp.agent [None req-f4e42c71-416e-4038-9ec5-3afcad1176e6 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:52Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a6e310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b1fb80>], id=5d10f104-13d7-4df8-93b8-294b0e683296, ip_allocation=immediate, mac_address=fa:16:3e:bf:ab:a6, name=tempest-PortsTestJSON-109073341, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T10:03:48Z, description=, dns_domain=, id=74771176-a098-4cd6-a32a-0a135a778efe, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-662481777, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25009, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2380, status=ACTIVE, subnets=['04274149-ce30-415f-93c5-681d92f7623d'], tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:03:50Z, vlan_transparent=None, network_id=74771176-a098-4cd6-a32a-0a135a778efe, port_security_enabled=True, project_id=6eb850a1541d4942b249428ef6092e5e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['cadd5356-9a8d-419a-ac04-589c2522a695'], standard_attr_id=2391, status=DOWN, tags=[], tenant_id=6eb850a1541d4942b249428ef6092e5e, updated_at=2025-11-23T10:03:52Z on network 74771176-a098-4cd6-a32a-0a135a778efe#033[00m
Nov 23 05:03:53 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:53.241 262721 INFO neutron.agent.dhcp.agent [None req-9f89fd63-1629-4ae9-9f11-5dd338c7b69a - - - - - -] DHCP configuration for ports {'fc7a07cf-a86f-4fb8-9142-978ad49d73d3'} is completed#033[00m
Nov 23 05:03:53 localhost dnsmasq[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/addn_hosts - 1 addresses
Nov 23 05:03:53 localhost podman[318923]: 2025-11-23 10:03:53.358561934 +0000 UTC m=+0.060739457 container kill c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:03:53 localhost dnsmasq-dhcp[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/host
Nov 23 05:03:53 localhost dnsmasq-dhcp[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/opts
Nov 23 05:03:53 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:53.632 262721 INFO neutron.agent.dhcp.agent [None req-19e435bd-1d92-4c71-b86c-2f00f59a4a46 - - - - - -] DHCP configuration for ports {'5d10f104-13d7-4df8-93b8-294b0e683296'} is completed#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.861 281617 DEBUG nova.network.neutron [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updating instance_info_cache with network_info: [{"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.887 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Releasing lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.888 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Instance network_info: |[{"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.889 281617 DEBUG oslo_concurrency.lockutils [req-42cc57ef-f1ca-4be0-ad5e-3497ee5a8fec req-a117ea5f-84c2-4ec4-9c8a-1b7f189adf7c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquired lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.889 281617 DEBUG nova.network.neutron [req-42cc57ef-f1ca-4be0-ad5e-3497ee5a8fec req-a117ea5f-84c2-4ec4-9c8a-1b7f189adf7c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Refreshing network info cache for port cde9c0d4-e623-4543-a691-b11d78d0521b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.894 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Start _get_guest_xml network_info=[{"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T09:56:45Z,direct_url=<?>,disk_format='qcow2',id=c5806483-57a8-4254-b41b-254b888c8606,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1915d3e5d4254231a0517e2dcf35848f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T09:56:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'guest_format': None, 'boot_index': 0, 'image_id': 'c5806483-57a8-4254-b41b-254b888c8606'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.908 281617 WARNING nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.912 281617 DEBUG nova.virt.libvirt.host [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Searching host: 'np0005532586.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.912 281617 DEBUG nova.virt.libvirt.host [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.917 281617 DEBUG nova.virt.libvirt.host [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Searching host: 'np0005532586.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.918 281617 DEBUG nova.virt.libvirt.host [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.919 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.919 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-23T09:56:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43b374b4-75d9-47f9-aa6b-ddb1a45f7c04',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-23T09:56:45Z,direct_url=<?>,disk_format='qcow2',id=c5806483-57a8-4254-b41b-254b888c8606,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='1915d3e5d4254231a0517e2dcf35848f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-23T09:56:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.920 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.920 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.920 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.921 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.921 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.921 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.922 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.922 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.922 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.923 281617 DEBUG nova.virt.hardware [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 23 05:03:53 localhost nova_compute[281613]: 2025-11-23 10:03:53.927 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e161 e161: 6 total, 6 up, 6 in
Nov 23 05:03:54 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:54.183 2 INFO neutron.agent.securitygroups_rpc [None req-5a886eea-af54-4cb9-a980-5c3836eff3f1 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:03:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:54.261 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:03:53Z, description=, device_id=3a855c67-f6cd-40d2-a969-f73f52c0332b, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790afb6a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a99400>], id=c5cb92b0-9199-4a96-ada4-0330bbb03eb0, ip_allocation=immediate, mac_address=fa:16:3e:99:e2:04, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2393, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:03:53Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:03:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 05:03:54 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/95359321' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 05:03:54 localhost nova_compute[281613]: 2025-11-23 10:03:54.495 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:54 localhost systemd[1]: tmp-crun.Ajxxgz.mount: Deactivated successfully.
Nov 23 05:03:54 localhost dnsmasq[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/addn_hosts - 0 addresses
Nov 23 05:03:54 localhost dnsmasq-dhcp[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/host
Nov 23 05:03:54 localhost dnsmasq-dhcp[318907]: read /var/lib/neutron/dhcp/74771176-a098-4cd6-a32a-0a135a778efe/opts
Nov 23 05:03:54 localhost podman[318994]: 2025-11-23 10:03:54.503937361 +0000 UTC m=+0.087632842 container kill c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:54 localhost podman[319008]: 2025-11-23 10:03:54.542430209 +0000 UTC m=+0.071876729 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 05:03:54 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:03:54 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:03:54 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:03:54 localhost nova_compute[281613]: 2025-11-23 10:03:54.546 281617 DEBUG nova.storage.rbd_utils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] rbd image 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 05:03:54 localhost nova_compute[281613]: 2025-11-23 10:03:54.561 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:03:54 localhost systemd[1]: tmp-crun.m7FHGL.mount: Deactivated successfully.
Nov 23 05:03:54 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:54.957 262721 INFO neutron.agent.dhcp.agent [None req-9332819a-8f36-4006-af57-29c737bb6b25 - - - - - -] DHCP configuration for ports {'c5cb92b0-9199-4a96-ada4-0330bbb03eb0'} is completed#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.036 281617 DEBUG nova.network.neutron [req-42cc57ef-f1ca-4be0-ad5e-3497ee5a8fec req-a117ea5f-84c2-4ec4-9c8a-1b7f189adf7c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updated VIF entry in instance network info cache for port cde9c0d4-e623-4543-a691-b11d78d0521b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.037 281617 DEBUG nova.network.neutron [req-42cc57ef-f1ca-4be0-ad5e-3497ee5a8fec req-a117ea5f-84c2-4ec4-9c8a-1b7f189adf7c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updating instance_info_cache with network_info: [{"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 05:03:55 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 05:03:55 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4204955573' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.067032) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235067077, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1463, "num_deletes": 262, "total_data_size": 2778614, "memory_usage": 2889288, "flush_reason": "Manual Compaction"}
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235078556, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1826635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22847, "largest_seqno": 24305, "table_properties": {"data_size": 1820430, "index_size": 3419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14676, "raw_average_key_size": 21, "raw_value_size": 1807542, "raw_average_value_size": 2669, "num_data_blocks": 148, "num_entries": 677, "num_filter_entries": 677, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892168, "oldest_key_time": 1763892168, "file_creation_time": 1763892235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 11577 microseconds, and 5326 cpu microseconds.
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.079 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.078608) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1826635 bytes OK
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.078633) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.080407) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.080432) EVENT_LOG_v1 {"time_micros": 1763892235080426, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.080453) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2771447, prev total WAL file size 2771447, number of live WAL files 2.
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.081503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end)
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1783KB)], [39(14MB)]
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235081623, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16754564, "oldest_snapshot_seqno": -1}
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.081 281617 DEBUG nova.virt.libvirt.vif [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T10:03:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1770644392',display_name='tempest-VolumesBackupsTest-instance-1770644392',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005532586.localdomain',hostname='tempest-volumesbackupstest-instance-1770644392',id=11,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbMtM0rpJqnclcNLBgk4KSEiwJyDxhOOxIkPhgwzgS7K/A/uLo6bfQU35ro/p2iaEURDpm+5ppielVp65dcBV9RePqVaIQh7oGJXrXOd4KXkh5g6iCx7O9S9TqrSJ5aCA==',key_name='tempest-keypair-269458161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532586.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='49ebd7a691dd4ea59ffbe9f5703e77e4',ramdisk_id='',reservation_id='r-ohu6ssq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1213662085',owner_user_name='tempest-VolumesBackupsTest-1213662085-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T10:03:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5f7e9736cbc74ce4ac3de51c4ac84504',uuid=0878698a-ffc9-486f-96bf-d5a905dca1b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.082 281617 DEBUG nova.network.os_vif_util [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Converting VIF {"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.083 281617 DEBUG nova.network.os_vif_util [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:05:71,bridge_name='br-int',has_traffic_filtering=True,id=cde9c0d4-e623-4543-a691-b11d78d0521b,network=Network(27537d61-8ae5-47a8-b217-f913cbb83ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcde9c0d4-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.085 281617 DEBUG nova.objects.instance [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0878698a-ffc9-486f-96bf-d5a905dca1b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.113 281617 DEBUG oslo_concurrency.lockutils [req-42cc57ef-f1ca-4be0-ad5e-3497ee5a8fec req-a117ea5f-84c2-4ec4-9c8a-1b7f189adf7c b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Releasing lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.118 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] End _get_guest_xml xml=<domain type="kvm">
Nov 23 05:03:55 localhost nova_compute[281613]:  <uuid>0878698a-ffc9-486f-96bf-d5a905dca1b1</uuid>
Nov 23 05:03:55 localhost nova_compute[281613]:  <name>instance-0000000b</name>
Nov 23 05:03:55 localhost nova_compute[281613]:  <memory>131072</memory>
Nov 23 05:03:55 localhost nova_compute[281613]:  <vcpu>1</vcpu>
Nov 23 05:03:55 localhost nova_compute[281613]:  <metadata>
Nov 23 05:03:55 localhost nova_compute[281613]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 23 05:03:55 localhost nova_compute[281613]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      <nova:name>tempest-VolumesBackupsTest-instance-1770644392</nova:name>
Nov 23 05:03:55 localhost nova_compute[281613]:      <nova:creationTime>2025-11-23 10:03:53</nova:creationTime>
Nov 23 05:03:55 localhost nova_compute[281613]:      <nova:flavor name="m1.nano">
Nov 23 05:03:55 localhost nova_compute[281613]:        <nova:memory>128</nova:memory>
Nov 23 05:03:55 localhost nova_compute[281613]:        <nova:disk>1</nova:disk>
Nov 23 05:03:55 localhost nova_compute[281613]:        <nova:swap>0</nova:swap>
Nov 23 05:03:55 localhost nova_compute[281613]:        <nova:ephemeral>0</nova:ephemeral>
Nov 23 05:03:55 localhost nova_compute[281613]:        <nova:vcpus>1</nova:vcpus>
Nov 23 05:03:55 localhost nova_compute[281613]:      </nova:flavor>
Nov 23 05:03:55 localhost nova_compute[281613]:      <nova:owner>
Nov 23 05:03:55 localhost nova_compute[281613]:        <nova:user uuid="5f7e9736cbc74ce4ac3de51c4ac84504">tempest-VolumesBackupsTest-1213662085-project-member</nova:user>
Nov 23 05:03:55 localhost nova_compute[281613]:        <nova:project uuid="49ebd7a691dd4ea59ffbe9f5703e77e4">tempest-VolumesBackupsTest-1213662085</nova:project>
Nov 23 05:03:55 localhost nova_compute[281613]:      </nova:owner>
Nov 23 05:03:55 localhost nova_compute[281613]:      <nova:root type="image" uuid="c5806483-57a8-4254-b41b-254b888c8606"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      <nova:ports>
Nov 23 05:03:55 localhost nova_compute[281613]:        <nova:port uuid="cde9c0d4-e623-4543-a691-b11d78d0521b">
Nov 23 05:03:55 localhost nova_compute[281613]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 23 05:03:55 localhost nova_compute[281613]:        </nova:port>
Nov 23 05:03:55 localhost nova_compute[281613]:      </nova:ports>
Nov 23 05:03:55 localhost nova_compute[281613]:    </nova:instance>
Nov 23 05:03:55 localhost nova_compute[281613]:  </metadata>
Nov 23 05:03:55 localhost nova_compute[281613]:  <sysinfo type="smbios">
Nov 23 05:03:55 localhost nova_compute[281613]:    <system>
Nov 23 05:03:55 localhost nova_compute[281613]:      <entry name="manufacturer">RDO</entry>
Nov 23 05:03:55 localhost nova_compute[281613]:      <entry name="product">OpenStack Compute</entry>
Nov 23 05:03:55 localhost nova_compute[281613]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 23 05:03:55 localhost nova_compute[281613]:      <entry name="serial">0878698a-ffc9-486f-96bf-d5a905dca1b1</entry>
Nov 23 05:03:55 localhost nova_compute[281613]:      <entry name="uuid">0878698a-ffc9-486f-96bf-d5a905dca1b1</entry>
Nov 23 05:03:55 localhost nova_compute[281613]:      <entry name="family">Virtual Machine</entry>
Nov 23 05:03:55 localhost nova_compute[281613]:    </system>
Nov 23 05:03:55 localhost nova_compute[281613]:  </sysinfo>
Nov 23 05:03:55 localhost nova_compute[281613]:  <os>
Nov 23 05:03:55 localhost nova_compute[281613]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 23 05:03:55 localhost nova_compute[281613]:    <boot dev="hd"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <smbios mode="sysinfo"/>
Nov 23 05:03:55 localhost nova_compute[281613]:  </os>
Nov 23 05:03:55 localhost nova_compute[281613]:  <features>
Nov 23 05:03:55 localhost nova_compute[281613]:    <acpi/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <apic/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <vmcoreinfo/>
Nov 23 05:03:55 localhost nova_compute[281613]:  </features>
Nov 23 05:03:55 localhost nova_compute[281613]:  <clock offset="utc">
Nov 23 05:03:55 localhost nova_compute[281613]:    <timer name="pit" tickpolicy="delay"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <timer name="hpet" present="no"/>
Nov 23 05:03:55 localhost nova_compute[281613]:  </clock>
Nov 23 05:03:55 localhost nova_compute[281613]:  <cpu mode="host-model" match="exact">
Nov 23 05:03:55 localhost nova_compute[281613]:    <topology sockets="1" cores="1" threads="1"/>
Nov 23 05:03:55 localhost nova_compute[281613]:  </cpu>
Nov 23 05:03:55 localhost nova_compute[281613]:  <devices>
Nov 23 05:03:55 localhost nova_compute[281613]:    <disk type="network" device="disk">
Nov 23 05:03:55 localhost nova_compute[281613]:      <driver type="raw" cache="none"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      <source protocol="rbd" name="vms/0878698a-ffc9-486f-96bf-d5a905dca1b1_disk">
Nov 23 05:03:55 localhost nova_compute[281613]:        <host name="172.18.0.103" port="6789"/>
Nov 23 05:03:55 localhost nova_compute[281613]:        <host name="172.18.0.104" port="6789"/>
Nov 23 05:03:55 localhost nova_compute[281613]:        <host name="172.18.0.105" port="6789"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      </source>
Nov 23 05:03:55 localhost nova_compute[281613]:      <auth username="openstack">
Nov 23 05:03:55 localhost nova_compute[281613]:        <secret type="ceph" uuid="46550e70-79cb-5f55-bf6d-1204b97e083b"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      </auth>
Nov 23 05:03:55 localhost nova_compute[281613]:      <target dev="vda" bus="virtio"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    </disk>
Nov 23 05:03:55 localhost nova_compute[281613]:    <disk type="network" device="cdrom">
Nov 23 05:03:55 localhost nova_compute[281613]:      <driver type="raw" cache="none"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      <source protocol="rbd" name="vms/0878698a-ffc9-486f-96bf-d5a905dca1b1_disk.config">
Nov 23 05:03:55 localhost nova_compute[281613]:        <host name="172.18.0.103" port="6789"/>
Nov 23 05:03:55 localhost nova_compute[281613]:        <host name="172.18.0.104" port="6789"/>
Nov 23 05:03:55 localhost nova_compute[281613]:        <host name="172.18.0.105" port="6789"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      </source>
Nov 23 05:03:55 localhost nova_compute[281613]:      <auth username="openstack">
Nov 23 05:03:55 localhost nova_compute[281613]:        <secret type="ceph" uuid="46550e70-79cb-5f55-bf6d-1204b97e083b"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      </auth>
Nov 23 05:03:55 localhost nova_compute[281613]:      <target dev="sda" bus="sata"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    </disk>
Nov 23 05:03:55 localhost nova_compute[281613]:    <interface type="ethernet">
Nov 23 05:03:55 localhost nova_compute[281613]:      <mac address="fa:16:3e:5f:05:71"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      <model type="virtio"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      <driver name="vhost" rx_queue_size="512"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      <mtu size="1442"/>
Nov 23 05:03:55 localhost nova_compute[281613]:      <target dev="tapcde9c0d4-e6"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    </interface>
Nov 23 05:03:55 localhost nova_compute[281613]:    <serial type="pty">
Nov 23 05:03:55 localhost nova_compute[281613]:      <log file="/var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1/console.log" append="off"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    </serial>
Nov 23 05:03:55 localhost nova_compute[281613]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <video>
Nov 23 05:03:55 localhost nova_compute[281613]:      <model type="virtio"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    </video>
Nov 23 05:03:55 localhost nova_compute[281613]:    <input type="tablet" bus="usb"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <rng model="virtio">
Nov 23 05:03:55 localhost nova_compute[281613]:      <backend model="random">/dev/urandom</backend>
Nov 23 05:03:55 localhost nova_compute[281613]:    </rng>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="pci" model="pcie-root-port"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <controller type="usb" index="0"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    <memballoon model="virtio">
Nov 23 05:03:55 localhost nova_compute[281613]:      <stats period="10"/>
Nov 23 05:03:55 localhost nova_compute[281613]:    </memballoon>
Nov 23 05:03:55 localhost nova_compute[281613]:  </devices>
Nov 23 05:03:55 localhost nova_compute[281613]: </domain>
Nov 23 05:03:55 localhost nova_compute[281613]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.119 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Preparing to wait for external event network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.120 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.120 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.121 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.123 281617 DEBUG nova.virt.libvirt.vif [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-23T10:03:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1770644392',display_name='tempest-VolumesBackupsTest-instance-1770644392',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005532586.localdomain',hostname='tempest-volumesbackupstest-instance-1770644392',id=11,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbMtM0rpJqnclcNLBgk4KSEiwJyDxhOOxIkPhgwzgS7K/A/uLo6bfQU35ro/p2iaEURDpm+5ppielVp65dcBV9RePqVaIQh7oGJXrXOd4KXkh5g6iCx7O9S9TqrSJ5aCA==',key_name='tempest-keypair-269458161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005532586.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='49ebd7a691dd4ea59ffbe9f5703e77e4',ramdisk_id='',reservation_id='r-ohu6ssq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1213662085',owner_user_name='tempest-VolumesBackupsTest-1213662085-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-23T10:03:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5f7e9736cbc74ce4ac3de51c4ac84504',uuid=0878698a-ffc9-486f-96bf-d5a905dca1b1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.123 281617 DEBUG nova.network.os_vif_util [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Converting VIF {"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.125 281617 DEBUG nova.network.os_vif_util [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:05:71,bridge_name='br-int',has_traffic_filtering=True,id=cde9c0d4-e623-4543-a691-b11d78d0521b,network=Network(27537d61-8ae5-47a8-b217-f913cbb83ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcde9c0d4-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.125 281617 DEBUG os_vif [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:05:71,bridge_name='br-int',has_traffic_filtering=True,id=cde9c0d4-e623-4543-a691-b11d78d0521b,network=Network(27537d61-8ae5-47a8-b217-f913cbb83ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcde9c0d4-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.126 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.127 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.127 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.131 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.131 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcde9c0d4-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.132 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcde9c0d4-e6, col_values=(('external_ids', {'iface-id': 'cde9c0d4-e623-4543-a691-b11d78d0521b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:05:71', 'vm-uuid': '0878698a-ffc9-486f-96bf-d5a905dca1b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.134 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.136 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.140 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.141 281617 INFO os_vif [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:05:71,bridge_name='br-int',has_traffic_filtering=True,id=cde9c0d4-e623-4543-a691-b11d78d0521b,network=Network(27537d61-8ae5-47a8-b217-f913cbb83ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcde9c0d4-e6')#033[00m
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 12591 keys, 15502514 bytes, temperature: kUnknown
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235160769, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 15502514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15433666, "index_size": 36299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 340990, "raw_average_key_size": 27, "raw_value_size": 15221789, "raw_average_value_size": 1208, "num_data_blocks": 1345, "num_entries": 12591, "num_filter_entries": 12591, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.161191) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15502514 bytes
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.162746) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.1 rd, 195.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 14.2 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(17.7) write-amplify(8.5) OK, records in: 13130, records dropped: 539 output_compression: NoCompression
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.162766) EVENT_LOG_v1 {"time_micros": 1763892235162755, "job": 22, "event": "compaction_finished", "compaction_time_micros": 79359, "compaction_time_cpu_micros": 47232, "output_level": 6, "num_output_files": 1, "total_output_size": 15502514, "num_input_records": 13130, "num_output_records": 12591, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235163050, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892235164222, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.081378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.164308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.164318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.164322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.164325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:03:55 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:03:55.164328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.209 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.209 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.209 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] No VIF found with MAC fa:16:3e:5f:05:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.210 281617 INFO nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Using config drive#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.250 281617 DEBUG nova.storage.rbd_utils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] rbd image 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.533 281617 INFO nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Creating config drive at /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1/disk.config#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.540 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkfxq2gfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.668 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkfxq2gfm" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.708 281617 DEBUG nova.storage.rbd_utils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] rbd image 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.712 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1/disk.config 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.729 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:55 localhost dnsmasq[318907]: exiting on receipt of SIGTERM
Nov 23 05:03:55 localhost podman[319154]: 2025-11-23 10:03:55.911025171 +0000 UTC m=+0.074082838 container kill c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:03:55 localhost systemd[1]: libpod-c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347.scope: Deactivated successfully.
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.944 281617 DEBUG oslo_concurrency.processutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1/disk.config 0878698a-ffc9-486f-96bf-d5a905dca1b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.945 281617 INFO nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Deleting local config drive /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1/disk.config because it was imported into RBD.#033[00m
Nov 23 05:03:55 localhost systemd[1]: Started libvirt secret daemon.
Nov 23 05:03:55 localhost ovn_controller[153786]: 2025-11-23T10:03:55Z|00183|binding|INFO|Removing iface tapb3fa72fb-ce ovn-installed in OVS
Nov 23 05:03:55 localhost podman[319171]: 2025-11-23 10:03:55.989727162 +0000 UTC m=+0.060784239 container died c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:03:55 localhost ovn_controller[153786]: 2025-11-23T10:03:55Z|00184|binding|INFO|Removing lport b3fa72fb-ce13-4f0e-bb51-741d25c150af ovn-installed in OVS
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.991 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:55 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:55.992 159429 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 3b4f8f9e-f661-475c-9c18-4e961969b05d with type ""#033[00m
Nov 23 05:03:55 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:55.993 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-74771176-a098-4cd6-a32a-0a135a778efe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74771176-a098-4cd6-a32a-0a135a778efe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abdbcaab-c4a6-41bc-809b-8994b37cb80e, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=b3fa72fb-ce13-4f0e-bb51-741d25c150af) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:55 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:55.994 159429 INFO neutron.agent.ovn.metadata.agent [-] Port b3fa72fb-ce13-4f0e-bb51-741d25c150af in datapath 74771176-a098-4cd6-a32a-0a135a778efe unbound from our chassis#033[00m
Nov 23 05:03:55 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:55.995 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74771176-a098-4cd6-a32a-0a135a778efe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:03:55 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:55.996 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5daa79-1042-4ffd-96f1-816d2280ce73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:55 localhost nova_compute[281613]: 2025-11-23 10:03:55.999 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:56 localhost systemd[1]: tmp-crun.iSzSV0.mount: Deactivated successfully.
Nov 23 05:03:56 localhost podman[319171]: 2025-11-23 10:03:56.036573364 +0000 UTC m=+0.107630371 container cleanup c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:03:56 localhost systemd[1]: libpod-conmon-c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347.scope: Deactivated successfully.
Nov 23 05:03:56 localhost NetworkManager[5990]: <info>  [1763892236.0431] manager: (tapcde9c0d4-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Nov 23 05:03:56 localhost kernel: device tapcde9c0d4-e6 entered promiscuous mode
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.045 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:56 localhost ovn_controller[153786]: 2025-11-23T10:03:56Z|00185|binding|INFO|Claiming lport cde9c0d4-e623-4543-a691-b11d78d0521b for this chassis.
Nov 23 05:03:56 localhost ovn_controller[153786]: 2025-11-23T10:03:56Z|00186|binding|INFO|cde9c0d4-e623-4543-a691-b11d78d0521b: Claiming fa:16:3e:5f:05:71 10.100.0.10
Nov 23 05:03:56 localhost systemd-udevd[319223]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.055 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:05:71 10.100.0.10'], port_security=['fa:16:3e:5f:05:71 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0878698a-ffc9-486f-96bf-d5a905dca1b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49ebd7a691dd4ea59ffbe9f5703e77e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd77fc436-3ab1-42e0-a52b-861d18fcc237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e956203-ccec-4e0d-b2cd-a19e87dc158b, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=cde9c0d4-e623-4543-a691-b11d78d0521b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.056 159429 INFO neutron.agent.ovn.metadata.agent [-] Port cde9c0d4-e623-4543-a691-b11d78d0521b in datapath 27537d61-8ae5-47a8-b217-f913cbb83ef7 bound to our chassis#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.058 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 537b5058-8e61-45ae-8f59-38dbe5e61006 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.058 159429 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 27537d61-8ae5-47a8-b217-f913cbb83ef7#033[00m
Nov 23 05:03:56 localhost ovn_controller[153786]: 2025-11-23T10:03:56Z|00187|binding|INFO|Setting lport cde9c0d4-e623-4543-a691-b11d78d0521b ovn-installed in OVS
Nov 23 05:03:56 localhost ovn_controller[153786]: 2025-11-23T10:03:56Z|00188|binding|INFO|Setting lport cde9c0d4-e623-4543-a691-b11d78d0521b up in Southbound
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.064 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:56 localhost NetworkManager[5990]: <info>  [1763892236.0735] device (tapcde9c0d4-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Nov 23 05:03:56 localhost NetworkManager[5990]: <info>  [1763892236.0742] device (tapcde9c0d4-e6): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Nov 23 05:03:56 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e162 e162: 6 total, 6 up, 6 in
Nov 23 05:03:56 localhost podman[319173]: 2025-11-23 10:03:56.078834594 +0000 UTC m=+0.143948621 container remove c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-74771176-a098-4cd6-a32a-0a135a778efe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.075 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7b3ecc-1bac-4e9b-b02c-276f054df33b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.076 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap27537d61-81 in ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.077 262865 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap27537d61-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.077 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[aea458a6-c6c8-4e6e-8655-4e1d844b21e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.078 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b40b4a-7476-49b9-82fc-9cd6553417d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost kernel: device tapb3fa72fb-ce left promiscuous mode
Nov 23 05:03:56 localhost systemd-machined[203166]: New machine qemu-2-instance-0000000b.
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.096 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.098 159535 DEBUG oslo.privsep.daemon [-] privsep: reply[310a6f1a-3ea4-4268-9e96-9e56675d897e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost systemd[1]: Started Virtual Machine qemu-2-instance-0000000b.
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.112 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.112 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebbaff5-9264-43fc-a4a8-597af8d8a1e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:56.136 262721 INFO neutron.agent.dhcp.agent [None req-04635b85-c17f-4527-9bed-a5dfb8943cd1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.144 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[a5636e9f-5ced-4a14-8123-2a70b26b944d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost NetworkManager[5990]: <info>  [1763892236.1514] manager: (tap27537d61-80): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.150 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[9a48eb83-fa32-4340-86e2-23dd427ef42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.181 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[0086b023-5824-4f67-805c-f8195a830341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.185 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[c19fa487-5ca0-416e-a23c-be7179e46e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap27537d61-81: link becomes ready
Nov 23 05:03:56 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap27537d61-80: link becomes ready
Nov 23 05:03:56 localhost NetworkManager[5990]: <info>  [1763892236.2161] device (tap27537d61-80): carrier: link connected
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.220 308993 DEBUG oslo.privsep.daemon [-] privsep: reply[531477a7-c466-4ed6-b1de-7eefb8732b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.238 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[a07a29ca-6c94-42d3-8478-542fb627d2a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27537d61-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:98:e2:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1217937, 'reachable_time': 44018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 
0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319270, 'error': None, 'target': 'ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.255 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[436fd51f-4093-4182-ac46-48fb54e37352]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:e223'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1217937, 'tstamp': 1217937}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319279, 'error': None, 'target': 'ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.272 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb31d98-a101-4a96-afa7-10d6b0dc44de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap27537d61-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:98:e2:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1217937, 'reachable_time': 44018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 
0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319281, 'error': None, 'target': 'ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.299 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[50dfb09a-1bf7-47d6-9994-537c79dee831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.364 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[093cfd58-6090-4947-a706-b180ac1f9de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.366 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27537d61-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.367 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.368 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27537d61-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.371 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:56 localhost kernel: device tap27537d61-80 entered promiscuous mode
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.375 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap27537d61-80, col_values=(('external_ids', {'iface-id': 'fbcc667a-03be-4e7a-b7ea-70d45337df41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:03:56 localhost ovn_controller[153786]: 2025-11-23T10:03:56Z|00189|binding|INFO|Releasing lport fbcc667a-03be-4e7a-b7ea-70d45337df41 from this chassis (sb_readonly=0)
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.388 159429 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/27537d61-8ae5-47a8-b217-f913cbb83ef7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/27537d61-8ae5-47a8-b217-f913cbb83ef7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.388 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.390 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f090774d-096f-4b59-8847-5a69b7561302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.391 159429 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: global
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    log         /dev/log local0 debug
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    log-tag     haproxy-metadata-proxy-27537d61-8ae5-47a8-b217-f913cbb83ef7
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    user        root
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    group       root
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    maxconn     1024
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    pidfile     /var/lib/neutron/external/pids/27537d61-8ae5-47a8-b217-f913cbb83ef7.pid.haproxy
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    daemon
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: defaults
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    log global
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    mode http
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    option httplog
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    option dontlognull
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    option http-server-close
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    option forwardfor
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    retries                 3
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    timeout http-request    30s
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    timeout connect         30s
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    timeout client          32s
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    timeout server          32s
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    timeout http-keep-alive 30s
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: listen listener
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    bind 169.254.169.254:80
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    server metadata /var/lib/neutron/metadata_proxy
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]:    http-request add-header X-OVN-Network-ID 27537d61-8ae5-47a8-b217-f913cbb83ef7
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 23 05:03:56 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:03:56.393 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:03:56 localhost ovn_metadata_agent[159423]: 2025-11-23 10:03:56.394 159429 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'env', 'PROCESS_TAG=haproxy-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/27537d61-8ae5-47a8-b217-f913cbb83ef7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.440 281617 DEBUG nova.virt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Emitting event <LifecycleEvent: 1763892236.4389453, 0878698a-ffc9-486f-96bf-d5a905dca1b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.440 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] VM Started (Lifecycle Event)#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.476 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.481 281617 DEBUG nova.virt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Emitting event <LifecycleEvent: 1763892236.4448507, 0878698a-ffc9-486f-96bf-d5a905dca1b1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.482 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] VM Paused (Lifecycle Event)#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.504 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.509 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.536 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.653 281617 DEBUG nova.compute.manager [req-4f050f5b-bb81-44fa-aa7a-417325f49b07 req-332c32fb-fb53-463e-a64f-65fbe4295b01 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received event network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.654 281617 DEBUG oslo_concurrency.lockutils [req-4f050f5b-bb81-44fa-aa7a-417325f49b07 req-332c32fb-fb53-463e-a64f-65fbe4295b01 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.655 281617 DEBUG oslo_concurrency.lockutils [req-4f050f5b-bb81-44fa-aa7a-417325f49b07 req-332c32fb-fb53-463e-a64f-65fbe4295b01 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.655 281617 DEBUG oslo_concurrency.lockutils [req-4f050f5b-bb81-44fa-aa7a-417325f49b07 req-332c32fb-fb53-463e-a64f-65fbe4295b01 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.655 281617 DEBUG nova.compute.manager [req-4f050f5b-bb81-44fa-aa7a-417325f49b07 req-332c32fb-fb53-463e-a64f-65fbe4295b01 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Processing event network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.656 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.662 281617 DEBUG nova.virt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Emitting event <LifecycleEvent: 1763892236.6610537, 0878698a-ffc9-486f-96bf-d5a905dca1b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.662 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] VM Resumed (Lifecycle Event)#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.664 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.667 281617 INFO nova.virt.libvirt.driver [-] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Instance spawned successfully.#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.667 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.688 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.692 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.692 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.693 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.693 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.693 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.694 281617 DEBUG nova.virt.libvirt.driver [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.697 281617 DEBUG nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.720 281617 INFO nova.compute.manager [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.754 281617 INFO nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Took 7.68 seconds to spawn the instance on the hypervisor.#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.755 281617 DEBUG nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 05:03:56 localhost ovn_controller[153786]: 2025-11-23T10:03:56Z|00190|binding|INFO|Releasing lport fbcc667a-03be-4e7a-b7ea-70d45337df41 from this chassis (sb_readonly=0)
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.824 281617 INFO nova.compute.manager [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Took 8.92 seconds to build instance.#033[00m
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.825 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:03:56 localhost podman[319340]: 
Nov 23 05:03:56 localhost nova_compute[281613]: 2025-11-23 10:03:56.836 281617 DEBUG oslo_concurrency.lockutils [None req-dc022d26-398e-4427-8b9e-d6e32e3174fc 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:56 localhost podman[319340]: 2025-11-23 10:03:56.848639348 +0000 UTC m=+0.113723525 container create 6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true)
Nov 23 05:03:56 localhost systemd[1]: Started libpod-conmon-6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c.scope.
Nov 23 05:03:56 localhost podman[319340]: 2025-11-23 10:03:56.793952815 +0000 UTC m=+0.059037062 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 23 05:03:56 localhost systemd[1]: Started libcrun container.
Nov 23 05:03:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bac47e1294208475b1d0f19cd49cdebfa567aa05ed938c6e779060efdac377a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:03:56 localhost podman[319340]: 2025-11-23 10:03:56.923176467 +0000 UTC m=+0.188260644 container init 6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 05:03:56 localhost podman[319340]: 2025-11-23 10:03:56.931582364 +0000 UTC m=+0.196666551 container start 6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:03:56 localhost neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7[319354]: [NOTICE]   (319358) : New worker (319360) forked
Nov 23 05:03:56 localhost neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7[319354]: [NOTICE]   (319358) : Loading success.
Nov 23 05:03:57 localhost systemd[1]: var-lib-containers-storage-overlay-17e70e3d0aa8e5b305409862e3835f1eb78be7b6e058763c80130a75bfc79791-merged.mount: Deactivated successfully.
Nov 23 05:03:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4b6d757bef4be3686d0b85d7a9985bf76d39710dd2a8081453d4bf0ff812347-userdata-shm.mount: Deactivated successfully.
Nov 23 05:03:57 localhost systemd[1]: run-netns-qdhcp\x2d74771176\x2da098\x2d4cd6\x2da32a\x2d0a135a778efe.mount: Deactivated successfully.
Nov 23 05:03:57 localhost nova_compute[281613]: 2025-11-23 10:03:57.796 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:03:57 localhost nova_compute[281613]: 2025-11-23 10:03:57.824 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Triggering sync for uuid 0878698a-ffc9-486f-96bf-d5a905dca1b1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 23 05:03:57 localhost nova_compute[281613]: 2025-11-23 10:03:57.825 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:57 localhost nova_compute[281613]: 2025-11-23 10:03:57.825 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:57 localhost nova_compute[281613]: 2025-11-23 10:03:57.854 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:58 localhost nova_compute[281613]: 2025-11-23 10:03:58.733 281617 DEBUG nova.compute.manager [req-aaad96c1-a6f6-46dd-8f2c-c915c443f511 req-bc54c19b-6bf0-417d-8030-7b56c33a0471 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received event network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 05:03:58 localhost nova_compute[281613]: 2025-11-23 10:03:58.734 281617 DEBUG oslo_concurrency.lockutils [req-aaad96c1-a6f6-46dd-8f2c-c915c443f511 req-bc54c19b-6bf0-417d-8030-7b56c33a0471 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:03:58 localhost nova_compute[281613]: 2025-11-23 10:03:58.734 281617 DEBUG oslo_concurrency.lockutils [req-aaad96c1-a6f6-46dd-8f2c-c915c443f511 req-bc54c19b-6bf0-417d-8030-7b56c33a0471 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:03:58 localhost nova_compute[281613]: 2025-11-23 10:03:58.734 281617 DEBUG oslo_concurrency.lockutils [req-aaad96c1-a6f6-46dd-8f2c-c915c443f511 req-bc54c19b-6bf0-417d-8030-7b56c33a0471 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:03:58 localhost nova_compute[281613]: 2025-11-23 10:03:58.735 281617 DEBUG nova.compute.manager [req-aaad96c1-a6f6-46dd-8f2c-c915c443f511 req-bc54c19b-6bf0-417d-8030-7b56c33a0471 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] No waiting events found dispatching network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 05:03:58 localhost nova_compute[281613]: 2025-11-23 10:03:58.735 281617 WARNING nova.compute.manager [req-aaad96c1-a6f6-46dd-8f2c-c915c443f511 req-bc54c19b-6bf0-417d-8030-7b56c33a0471 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received unexpected event network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b for instance with vm_state active and task_state None.#033[00m
Nov 23 05:03:59 localhost neutron_sriov_agent[255613]: 2025-11-23 10:03:59.097 2 INFO neutron.agent.securitygroups_rpc [None req-73f9f53a-edf4-45e5-a635-4120a726bffe f436a64c9a134831a0f528309f399f1d 807b835f4cc944269d2f71f8e519b08a - - default default] Security group member updated ['c2582f3e-b285-4f13-ba8c-38a0c5b47d8d']#033[00m
Nov 23 05:03:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:04:00 localhost nova_compute[281613]: 2025-11-23 10:04:00.136 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:00 localhost nova_compute[281613]: 2025-11-23 10:04:00.697 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:01 localhost nova_compute[281613]: 2025-11-23 10:04:01.105 281617 DEBUG nova.compute.manager [req-56e4ec35-8889-4e82-9f38-1acfac85c1bf req-3c851cae-1d8c-4e3d-ba66-22958fdcf36f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received event network-changed-cde9c0d4-e623-4543-a691-b11d78d0521b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 05:04:01 localhost nova_compute[281613]: 2025-11-23 10:04:01.106 281617 DEBUG nova.compute.manager [req-56e4ec35-8889-4e82-9f38-1acfac85c1bf req-3c851cae-1d8c-4e3d-ba66-22958fdcf36f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Refreshing instance network info cache due to event network-changed-cde9c0d4-e623-4543-a691-b11d78d0521b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 23 05:04:01 localhost nova_compute[281613]: 2025-11-23 10:04:01.107 281617 DEBUG oslo_concurrency.lockutils [req-56e4ec35-8889-4e82-9f38-1acfac85c1bf req-3c851cae-1d8c-4e3d-ba66-22958fdcf36f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 05:04:01 localhost nova_compute[281613]: 2025-11-23 10:04:01.108 281617 DEBUG oslo_concurrency.lockutils [req-56e4ec35-8889-4e82-9f38-1acfac85c1bf req-3c851cae-1d8c-4e3d-ba66-22958fdcf36f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquired lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 05:04:01 localhost nova_compute[281613]: 2025-11-23 10:04:01.108 281617 DEBUG nova.network.neutron [req-56e4ec35-8889-4e82-9f38-1acfac85c1bf req-3c851cae-1d8c-4e3d-ba66-22958fdcf36f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Refreshing network info cache for port cde9c0d4-e623-4543-a691-b11d78d0521b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 23 05:04:01 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:04:01 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:04:01 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:04:01 localhost podman[319385]: 2025-11-23 10:04:01.366781357 +0000 UTC m=+0.078969589 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 23 05:04:02 localhost nova_compute[281613]: 2025-11-23 10:04:02.087 281617 DEBUG nova.network.neutron [req-56e4ec35-8889-4e82-9f38-1acfac85c1bf req-3c851cae-1d8c-4e3d-ba66-22958fdcf36f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updated VIF entry in instance network info cache for port cde9c0d4-e623-4543-a691-b11d78d0521b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 23 05:04:02 localhost nova_compute[281613]: 2025-11-23 10:04:02.088 281617 DEBUG nova.network.neutron [req-56e4ec35-8889-4e82-9f38-1acfac85c1bf req-3c851cae-1d8c-4e3d-ba66-22958fdcf36f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updating instance_info_cache with network_info: [{"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 05:04:02 localhost nova_compute[281613]: 2025-11-23 10:04:02.110 281617 DEBUG oslo_concurrency.lockutils [req-56e4ec35-8889-4e82-9f38-1acfac85c1bf req-3c851cae-1d8c-4e3d-ba66-22958fdcf36f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Releasing lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 05:04:03 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e163 e163: 6 total, 6 up, 6 in
Nov 23 05:04:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:04:05 localhost dnsmasq[316373]: exiting on receipt of SIGTERM
Nov 23 05:04:05 localhost podman[319423]: 2025-11-23 10:04:05.102702457 +0000 UTC m=+0.052064425 container kill 3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5266388-eb1d-4ef2-ac78-bf3856ad5b84, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:04:05 localhost systemd[1]: libpod-3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338.scope: Deactivated successfully.
Nov 23 05:04:05 localhost nova_compute[281613]: 2025-11-23 10:04:05.137 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:05 localhost podman[319436]: 2025-11-23 10:04:05.178426157 +0000 UTC m=+0.059638939 container died 3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5266388-eb1d-4ef2-ac78-bf3856ad5b84, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:04:05 localhost podman[319436]: 2025-11-23 10:04:05.215411544 +0000 UTC m=+0.096624276 container cleanup 3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5266388-eb1d-4ef2-ac78-bf3856ad5b84, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:04:05 localhost systemd[1]: libpod-conmon-3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338.scope: Deactivated successfully.
Nov 23 05:04:05 localhost podman[319438]: 2025-11-23 10:04:05.250076068 +0000 UTC m=+0.125358109 container remove 3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5266388-eb1d-4ef2-ac78-bf3856ad5b84, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 05:04:05 localhost nova_compute[281613]: 2025-11-23 10:04:05.260 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:05 localhost ovn_controller[153786]: 2025-11-23T10:04:05Z|00191|binding|INFO|Releasing lport 537283dc-3bf2-449b-bdff-690ff3ce6572 from this chassis (sb_readonly=0)
Nov 23 05:04:05 localhost kernel: device tap537283dc-3b left promiscuous mode
Nov 23 05:04:05 localhost ovn_controller[153786]: 2025-11-23T10:04:05Z|00192|binding|INFO|Setting lport 537283dc-3bf2-449b-bdff-690ff3ce6572 down in Southbound
Nov 23 05:04:05 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:05.271 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.243/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-e5266388-eb1d-4ef2-ac78-bf3856ad5b84', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5266388-eb1d-4ef2-ac78-bf3856ad5b84', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d0aa997cf0e428b8c7e20c806754329', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72717ea9-3adf-4245-b2b7-b38253c2d684, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=537283dc-3bf2-449b-bdff-690ff3ce6572) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:04:05 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:05.273 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 537283dc-3bf2-449b-bdff-690ff3ce6572 in datapath e5266388-eb1d-4ef2-ac78-bf3856ad5b84 unbound from our chassis#033[00m
Nov 23 05:04:05 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:05.274 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5266388-eb1d-4ef2-ac78-bf3856ad5b84, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:04:05 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:05.275 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[0e384512-72b5-438d-a5e6-b9c45bb0f59f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:05 localhost nova_compute[281613]: 2025-11-23 10:04:05.281 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:05.508 262721 INFO neutron.agent.dhcp.agent [None req-44f780de-4337-4911-8de4-4449822441bf - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:04:05 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:05.655 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:04:05 localhost nova_compute[281613]: 2025-11-23 10:04:05.700 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:06 localhost systemd[1]: var-lib-containers-storage-overlay-7e3d80d386c7859c4ac628dc8bcd70ab89e5c49226e1c2405aa920402c8e93f2-merged.mount: Deactivated successfully.
Nov 23 05:04:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3cca48e1fcac64db4229192f7c8972f18e59e069bbf672db774a829f7d13a338-userdata-shm.mount: Deactivated successfully.
Nov 23 05:04:06 localhost systemd[1]: run-netns-qdhcp\x2de5266388\x2deb1d\x2d4ef2\x2dac78\x2dbf3856ad5b84.mount: Deactivated successfully.
Nov 23 05:04:06 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:06.352 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:04:07 localhost ovn_controller[153786]: 2025-11-23T10:04:07Z|00193|binding|INFO|Releasing lport fbcc667a-03be-4e7a-b7ea-70d45337df41 from this chassis (sb_readonly=0)
Nov 23 05:04:07 localhost nova_compute[281613]: 2025-11-23 10:04:07.380 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:07 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:07.660 2 INFO neutron.agent.securitygroups_rpc [None req-cd03e682-7688-4e93-ac2d-e601f5fc3971 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:08 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:08.160 2 INFO neutron.agent.securitygroups_rpc [None req-3d60f928-a89d-481c-a25d-e6417d1d55cf 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:08 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:08.736 2 INFO neutron.agent.securitygroups_rpc [None req-4c8f3bc2-c43f-4c06-bcd4-666015157129 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:08 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:08.949 2 INFO neutron.agent.securitygroups_rpc [None req-fdb6f567-90f0-41d5-acb8-83f08adab1b1 f436a64c9a134831a0f528309f399f1d 807b835f4cc944269d2f71f8e519b08a - - default default] Security group member updated ['c2582f3e-b285-4f13-ba8c-38a0c5b47d8d']#033[00m
Nov 23 05:04:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:09.271 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:04:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:09.271 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:04:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:09.272 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:09 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:09.331 2 INFO neutron.agent.securitygroups_rpc [None req-b9ca6263-bc29-4379-89bf-449c3fc12e0d 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:04:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:04:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:04:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:04:10 localhost nova_compute[281613]: 2025-11-23 10:04:10.140 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:10 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:10.163 2 INFO neutron.agent.securitygroups_rpc [None req-aba9c038-400a-4d01-8bf0-588461edf0a1 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:10 localhost podman[319470]: 2025-11-23 10:04:10.186156729 +0000 UTC m=+0.085381061 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS)
Nov 23 05:04:10 localhost podman[319470]: 2025-11-23 10:04:10.202930622 +0000 UTC m=+0.102154964 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 23 05:04:10 localhost podman[319471]: 2025-11-23 10:04:10.248039267 +0000 UTC m=+0.139924981 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:04:10 localhost podman[319471]: 2025-11-23 10:04:10.28818519 +0000 UTC m=+0.180070874 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:04:10 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:04:10 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:04:10 localhost podman[319469]: 2025-11-23 10:04:10.290374539 +0000 UTC m=+0.190039283 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7)
Nov 23 05:04:10 localhost podman[319469]: 2025-11-23 10:04:10.370902148 +0000 UTC m=+0.270566892 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 05:04:10 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:04:10 localhost nova_compute[281613]: 2025-11-23 10:04:10.703 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:10 localhost ovn_controller[153786]: 2025-11-23T10:04:10Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:05:71 10.100.0.10
Nov 23 05:04:10 localhost ovn_controller[153786]: 2025-11-23T10:04:10Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:05:71 10.100.0.10
Nov 23 05:04:11 localhost podman[240144]: time="2025-11-23T10:04:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:04:11 localhost podman[240144]: @ - - [23/Nov/2025:10:04:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159337 "" "Go-http-client/1.1"
Nov 23 05:04:11 localhost podman[240144]: @ - - [23/Nov/2025:10:04:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20163 "" "Go-http-client/1.1"
Nov 23 05:04:11 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:11.998 2 INFO neutron.agent.securitygroups_rpc [None req-b1d88626-831d-4bca-895f-9342c26bbcc2 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:04:15 localhost nova_compute[281613]: 2025-11-23 10:04:15.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:04:15 localhost nova_compute[281613]: 2025-11-23 10:04:15.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:04:15 localhost nova_compute[281613]: 2025-11-23 10:04:15.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:04:15 localhost nova_compute[281613]: 2025-11-23 10:04:15.021 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:04:15 localhost nova_compute[281613]: 2025-11-23 10:04:15.143 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:15 localhost nova_compute[281613]: 2025-11-23 10:04:15.710 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.730 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.731 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.748 281617 DEBUG nova.objects.instance [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lazy-loading 'flavor' on Instance uuid 0878698a-ffc9-486f-96bf-d5a905dca1b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.784 281617 INFO nova.virt.libvirt.driver [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Ignoring supplied device name: /dev/vdb#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.799 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.996 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.997 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:04:16 localhost nova_compute[281613]: 2025-11-23 10:04:16.997 281617 INFO nova.compute.manager [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Attaching volume e6dcaf3a-9826-4f4a-99ce-65fb52624d71 to /dev/vdb#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.139 281617 DEBUG os_brick.utils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.108', 'multipath': True, 'enforce_multipath': True, 'host': 'np0005532586.localdomain', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.141 281617 INFO oslo.privsep.daemon [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpgmrldg94/privsep.sock']#033[00m
Nov 23 05:04:17 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e164 e164: 6 total, 6 up, 6 in
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.810 281617 INFO oslo.privsep.daemon [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.695 319535 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.702 319535 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.708 319535 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.708 319535 INFO oslo.privsep.daemon [-] privsep daemon running as pid 319535#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.814 319535 DEBUG oslo.privsep.daemon [-] privsep: reply[cf06a993-7099-4164-be56-b45f17e0f94b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.911 319535 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.922 319535 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.922 319535 DEBUG oslo.privsep.daemon [-] privsep: reply[c265dea3-84bf-49a4-9603-e19494d7ff53]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.924 319535 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.929 319535 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.930 319535 DEBUG oslo.privsep.daemon [-] privsep: reply[50bfbc39-7dd8-451d-9f38-8d58174d3563]: (4, ('InitiatorName=iqn.1994-05.com.redhat:2e1a82caa\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.932 319535 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.937 319535 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.937 319535 DEBUG oslo.privsep.daemon [-] privsep: reply[9aadb7c1-bc29-4047-b45c-2068b59370bb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.939 319535 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a98834-1e16-4c8a-88fc-4e55d750b9fb]: (4, '94eff25b-7070-4dc8-8cfe-491426a98db3') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.939 281617 DEBUG oslo_concurrency.processutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.960 281617 DEBUG oslo_concurrency.processutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.963 281617 DEBUG os_brick.initiator.connectors.lightos [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.964 281617 DEBUG os_brick.initiator.connectors.lightos [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.965 281617 DEBUG os_brick.initiator.connectors.lightos [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:94eff25b-7070-4dc8-8cfe-491426a98db3 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.965 281617 DEBUG os_brick.utils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] <== get_connector_properties: return (825ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.108', 'host': 'np0005532586.localdomain', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:2e1a82caa', 'do_local_attach': False, 'nvme_hostid': '94eff25b-7070-4dc8-8cfe-491426a98db3', 'system uuid': '94eff25b-7070-4dc8-8cfe-491426a98db3', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:94eff25b-7070-4dc8-8cfe-491426a98db3', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 23 05:04:17 localhost nova_compute[281613]: 2025-11-23 10:04:17.966 281617 DEBUG nova.virt.block_device [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updating existing volume attachment record: ab06789a-87b2-4134-b3fc-2d8944f18c88 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 23 05:04:18 localhost nova_compute[281613]: 2025-11-23 10:04:18.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:04:18 localhost nova_compute[281613]: 2025-11-23 10:04:18.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:04:18 localhost nova_compute[281613]: 2025-11-23 10:04:18.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:04:18 localhost nova_compute[281613]: 2025-11-23 10:04:18.161 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 23 05:04:18 localhost nova_compute[281613]: 2025-11-23 10:04:18.162 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquired lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 23 05:04:18 localhost nova_compute[281613]: 2025-11-23 10:04:18.162 281617 DEBUG nova.network.neutron [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 23 05:04:18 localhost nova_compute[281613]: 2025-11-23 10:04:18.162 281617 DEBUG nova.objects.instance [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0878698a-ffc9-486f-96bf-d5a905dca1b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.009 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.009 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.011 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.021 281617 DEBUG nova.objects.instance [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lazy-loading 'flavor' on Instance uuid 0878698a-ffc9-486f-96bf-d5a905dca1b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.048 281617 DEBUG nova.virt.libvirt.driver [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Attempting to attach volume e6dcaf3a-9826-4f4a-99ce-65fb52624d71 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.051 281617 DEBUG nova.virt.libvirt.guest [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] attach device xml: <disk type="network" device="disk">
Nov 23 05:04:19 localhost nova_compute[281613]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 23 05:04:19 localhost nova_compute[281613]:  <source protocol="rbd" name="volumes/volume-e6dcaf3a-9826-4f4a-99ce-65fb52624d71">
Nov 23 05:04:19 localhost nova_compute[281613]:    <host name="172.18.0.103" port="6789"/>
Nov 23 05:04:19 localhost nova_compute[281613]:    <host name="172.18.0.104" port="6789"/>
Nov 23 05:04:19 localhost nova_compute[281613]:    <host name="172.18.0.105" port="6789"/>
Nov 23 05:04:19 localhost nova_compute[281613]:  </source>
Nov 23 05:04:19 localhost nova_compute[281613]:  <auth username="openstack">
Nov 23 05:04:19 localhost nova_compute[281613]:    <secret type="ceph" uuid="46550e70-79cb-5f55-bf6d-1204b97e083b"/>
Nov 23 05:04:19 localhost nova_compute[281613]:  </auth>
Nov 23 05:04:19 localhost nova_compute[281613]:  <target dev="vdb" bus="virtio"/>
Nov 23 05:04:19 localhost nova_compute[281613]:  <serial>e6dcaf3a-9826-4f4a-99ce-65fb52624d71</serial>
Nov 23 05:04:19 localhost nova_compute[281613]: </disk>
Nov 23 05:04:19 localhost nova_compute[281613]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.196 281617 DEBUG nova.virt.libvirt.driver [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.197 281617 DEBUG nova.virt.libvirt.driver [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.198 281617 DEBUG nova.virt.libvirt.driver [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.198 281617 DEBUG nova.virt.libvirt.driver [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] No VIF found with MAC fa:16:3e:5f:05:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 23 05:04:19 localhost nova_compute[281613]: 2025-11-23 10:04:19.335 281617 DEBUG oslo_concurrency.lockutils [None req-28c59090-9160-4a86-8e3d-64ca87e54766 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e165 e165: 6 total, 6 up, 6 in
Nov 23 05:04:19 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:19.672 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:9f:5c 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a81c63d1-c197-41eb-93f7-be983c9ed80d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b87e3e64-b6cc-4f08-95a6-de593e031494) old=Port_Binding(mac=['fa:16:3e:af:9f:5c 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:04:19 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:19.674 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b87e3e64-b6cc-4f08-95a6-de593e031494 in datapath accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e updated#033[00m
Nov 23 05:04:19 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:19.678 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network accb7a47-b6e0-4b0c-81bc-2cbf0d4d0f4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:04:19 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:19.679 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[497f4948-cd08-4636-9d6d-0c002a5eca7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.146 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:20 localhost podman[319564]: 2025-11-23 10:04:20.198612764 +0000 UTC m=+0.095524825 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 05:04:20 localhost podman[319566]: 2025-11-23 10:04:20.208296786 +0000 UTC m=+0.100232013 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:04:20 localhost podman[319566]: 2025-11-23 10:04:20.213605259 +0000 UTC m=+0.105540426 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:04:20 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:04:20 localhost podman[319564]: 2025-11-23 10:04:20.230831713 +0000 UTC m=+0.127743714 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.234 281617 DEBUG nova.network.neutron [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updating instance_info_cache with network_info: [{"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 05:04:20 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.256 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Releasing lock "refresh_cache-0878698a-ffc9-486f-96bf-d5a905dca1b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.257 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.257 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.257 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.270 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.270 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.270 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.271 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.271 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:04:20 localhost podman[319567]: 2025-11-23 10:04:20.314258121 +0000 UTC m=+0.203986088 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 05:04:20 localhost podman[319567]: 2025-11-23 10:04:20.357856086 +0000 UTC m=+0.247584053 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:04:20 localhost podman[319565]: 2025-11-23 10:04:20.365707387 +0000 UTC m=+0.259283607 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:04:20 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:04:20 localhost podman[319565]: 2025-11-23 10:04:20.404959735 +0000 UTC m=+0.298535955 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 23 05:04:20 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.713 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:04:20 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3761541653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.766 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.815 281617 DEBUG nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.815 281617 DEBUG nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 05:04:20 localhost nova_compute[281613]: 2025-11-23 10:04:20.816 281617 DEBUG nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.015 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.016 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11354MB free_disk=41.70033645629883GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.016 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.017 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.115 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Instance 0878698a-ffc9-486f-96bf-d5a905dca1b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.115 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.115 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:04:21 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:21.132 2 INFO neutron.agent.securitygroups_rpc [None req-b505d753-a321-4285-8b8a-57c320b6a991 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.180 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:04:21 localhost systemd[1]: tmp-crun.57hOdc.mount: Deactivated successfully.
Nov 23 05:04:21 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 05:04:21 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3553475909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 05:04:21 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e166 e166: 6 total, 6 up, 6 in
Nov 23 05:04:21 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:04:21 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1006420118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.651 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.658 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.690 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.721 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:04:21 localhost nova_compute[281613]: 2025-11-23 10:04:21.722 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:22 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:22.084 2 INFO neutron.agent.securitygroups_rpc [None req-71e1d61a-9581-46c3-850d-9298f6399521 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:22 localhost openstack_network_exporter[242118]: ERROR   10:04:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:04:22 localhost openstack_network_exporter[242118]: ERROR   10:04:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:04:22 localhost openstack_network_exporter[242118]: ERROR   10:04:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:04:22 localhost openstack_network_exporter[242118]: ERROR   10:04:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:04:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:04:22 localhost openstack_network_exporter[242118]: ERROR   10:04:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:04:22 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e167 e167: 6 total, 6 up, 6 in
Nov 23 05:04:23 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:23.440 2 INFO neutron.agent.securitygroups_rpc [None req-360628bd-6ea7-46e4-a35e-1747acc2d18c 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']
Nov 23 05:04:23 localhost nova_compute[281613]: 2025-11-23 10:04:23.483 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 05:04:23 localhost nova_compute[281613]: 2025-11-23 10:04:23.484 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 23 05:04:23 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e168 e168: 6 total, 6 up, 6 in
Nov 23 05:04:23 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:23.974 2 INFO neutron.agent.securitygroups_rpc [None req-53980d87-428d-4527-8575-9963b178026f 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']
Nov 23 05:04:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:04:25 localhost nova_compute[281613]: 2025-11-23 10:04:25.149 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:25 localhost nova_compute[281613]: 2025-11-23 10:04:25.717 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:25 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e169 e169: 6 total, 6 up, 6 in
Nov 23 05:04:26 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e170 e170: 6 total, 6 up, 6 in
Nov 23 05:04:27 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e171 e171: 6 total, 6 up, 6 in
Nov 23 05:04:28 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e172 e172: 6 total, 6 up, 6 in
Nov 23 05:04:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e173 e173: 6 total, 6 up, 6 in
Nov 23 05:04:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 23 05:04:30 localhost nova_compute[281613]: 2025-11-23 10:04:30.155 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:30 localhost nova_compute[281613]: 2025-11-23 10:04:30.721 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:31 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e174 e174: 6 total, 6 up, 6 in
Nov 23 05:04:32 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e175 e175: 6 total, 6 up, 6 in
Nov 23 05:04:32 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:32.605 2 INFO neutron.agent.securitygroups_rpc [None req-fa166c8d-7d85-4b6f-949c-c3bef6490854 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']
Nov 23 05:04:32 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:04:32 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1119145413' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:04:32 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:04:32 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1119145413' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:04:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e176 e176: 6 total, 6 up, 6 in
Nov 23 05:04:33 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:33.341 2 INFO neutron.agent.securitygroups_rpc [None req-4ffb6c00-2d92-41b8-843d-89b3bf39eddb 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']
Nov 23 05:04:34 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:34.049 262721 INFO neutron.agent.linux.ip_lib [None req-d4fce0bc-1add-478f-bd09-56a8f50ed49d - - - - - -] Device tapf5f1b1f5-6f cannot be used as it has no MAC address
Nov 23 05:04:34 localhost nova_compute[281613]: 2025-11-23 10:04:34.076 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:34 localhost kernel: device tapf5f1b1f5-6f entered promiscuous mode
Nov 23 05:04:34 localhost NetworkManager[5990]: <info>  [1763892274.0850] manager: (tapf5f1b1f5-6f): new Generic device (/org/freedesktop/NetworkManager/Devices/41)
Nov 23 05:04:34 localhost ovn_controller[153786]: 2025-11-23T10:04:34Z|00194|binding|INFO|Claiming lport f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1 for this chassis.
Nov 23 05:04:34 localhost ovn_controller[153786]: 2025-11-23T10:04:34Z|00195|binding|INFO|f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1: Claiming unknown
Nov 23 05:04:34 localhost nova_compute[281613]: 2025-11-23 10:04:34.088 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:34 localhost systemd-udevd[319704]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:04:34 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:34.096 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-69259e0a-4a90-49b3-a536-da994949edf9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69259e0a-4a90-49b3-a536-da994949edf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8633d61c76748a7a900f3c8cea84ef3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86b53736-2d9a-47bd-9e36-8e373431dd5c, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 05:04:34 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:34.098 159429 INFO neutron.agent.ovn.metadata.agent [-] Port f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1 in datapath 69259e0a-4a90-49b3-a536-da994949edf9 bound to our chassis
Nov 23 05:04:34 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:34.100 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 69259e0a-4a90-49b3-a536-da994949edf9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 23 05:04:34 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:34.101 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f680b743-4561-4b40-95e7-e16bf4e963a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 05:04:34 localhost journal[229736]: ethtool ioctl error on tapf5f1b1f5-6f: No such device
Nov 23 05:04:34 localhost ovn_controller[153786]: 2025-11-23T10:04:34Z|00196|binding|INFO|Setting lport f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1 ovn-installed in OVS
Nov 23 05:04:34 localhost ovn_controller[153786]: 2025-11-23T10:04:34Z|00197|binding|INFO|Setting lport f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1 up in Southbound
Nov 23 05:04:34 localhost journal[229736]: ethtool ioctl error on tapf5f1b1f5-6f: No such device
Nov 23 05:04:34 localhost nova_compute[281613]: 2025-11-23 10:04:34.128 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:34 localhost journal[229736]: ethtool ioctl error on tapf5f1b1f5-6f: No such device
Nov 23 05:04:34 localhost journal[229736]: ethtool ioctl error on tapf5f1b1f5-6f: No such device
Nov 23 05:04:34 localhost journal[229736]: ethtool ioctl error on tapf5f1b1f5-6f: No such device
Nov 23 05:04:34 localhost journal[229736]: ethtool ioctl error on tapf5f1b1f5-6f: No such device
Nov 23 05:04:34 localhost journal[229736]: ethtool ioctl error on tapf5f1b1f5-6f: No such device
Nov 23 05:04:34 localhost journal[229736]: ethtool ioctl error on tapf5f1b1f5-6f: No such device
Nov 23 05:04:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e177 e177: 6 total, 6 up, 6 in
Nov 23 05:04:34 localhost nova_compute[281613]: 2025-11-23 10:04:34.204 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:34 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:34.385 2 INFO neutron.agent.securitygroups_rpc [None req-e699b8a4-5f06-487a-a300-fd9ee1a788a2 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']
Nov 23 05:04:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:04:35 localhost nova_compute[281613]: 2025-11-23 10:04:35.158 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:35 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:35.160 2 INFO neutron.agent.securitygroups_rpc [None req-519a4a04-72f9-40b4-96af-e4987b6dbb80 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']
Nov 23 05:04:35 localhost podman[319775]: 2025-11-23 10:04:35.193243875 +0000 UTC m=+0.106415059 container create 428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-69259e0a-4a90-49b3-a536-da994949edf9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0)
Nov 23 05:04:35 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e178 e178: 6 total, 6 up, 6 in
Nov 23 05:04:35 localhost podman[319775]: 2025-11-23 10:04:35.135482188 +0000 UTC m=+0.048653412 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:04:35 localhost systemd[1]: Started libpod-conmon-428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1.scope.
Nov 23 05:04:35 localhost systemd[1]: Started libcrun container.
Nov 23 05:04:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87b1c51ba5e939dbb1a6dd07adabf5ca598480c622950e5bceae7085737551b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:04:35 localhost podman[319775]: 2025-11-23 10:04:35.304268607 +0000 UTC m=+0.217439791 container init 428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-69259e0a-4a90-49b3-a536-da994949edf9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 05:04:35 localhost podman[319775]: 2025-11-23 10:04:35.316327132 +0000 UTC m=+0.229498326 container start 428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-69259e0a-4a90-49b3-a536-da994949edf9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 05:04:35 localhost dnsmasq[319793]: started, version 2.85 cachesize 150
Nov 23 05:04:35 localhost dnsmasq[319793]: DNS service limited to local subnets
Nov 23 05:04:35 localhost dnsmasq[319793]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:04:35 localhost dnsmasq[319793]: warning: no upstream servers configured
Nov 23 05:04:35 localhost dnsmasq-dhcp[319793]: DHCP, static leases only on 10.101.0.0, lease time 1d
Nov 23 05:04:35 localhost dnsmasq[319793]: read /var/lib/neutron/dhcp/69259e0a-4a90-49b3-a536-da994949edf9/addn_hosts - 0 addresses
Nov 23 05:04:35 localhost dnsmasq-dhcp[319793]: read /var/lib/neutron/dhcp/69259e0a-4a90-49b3-a536-da994949edf9/host
Nov 23 05:04:35 localhost dnsmasq-dhcp[319793]: read /var/lib/neutron/dhcp/69259e0a-4a90-49b3-a536-da994949edf9/opts
Nov 23 05:04:35 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:35.521 262721 INFO neutron.agent.dhcp.agent [None req-d28b1878-4e13-4d91-9299-f539be001b2f - - - - - -] DHCP configuration for ports {'433f7d15-78af-4473-8ae0-99fa84ac4c04'} is completed
Nov 23 05:04:35 localhost nova_compute[281613]: 2025-11-23 10:04:35.727 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:36 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e179 e179: 6 total, 6 up, 6 in
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.609 281617 DEBUG oslo_concurrency.lockutils [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.610 281617 DEBUG oslo_concurrency.lockutils [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.628 281617 INFO nova.compute.manager [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Detaching volume e6dcaf3a-9826-4f4a-99ce-65fb52624d71
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.693 281617 INFO nova.virt.block_device [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Attempting to driver detach volume e6dcaf3a-9826-4f4a-99ce-65fb52624d71 from mountpoint /dev/vdb
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.705 281617 DEBUG nova.virt.libvirt.driver [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Attempting to detach device vdb from instance 0878698a-ffc9-486f-96bf-d5a905dca1b1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.706 281617 DEBUG nova.virt.libvirt.guest [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 23 05:04:36 localhost nova_compute[281613]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 23 05:04:36 localhost nova_compute[281613]:  <source protocol="rbd" name="volumes/volume-e6dcaf3a-9826-4f4a-99ce-65fb52624d71">
Nov 23 05:04:36 localhost nova_compute[281613]:    <host name="172.18.0.103" port="6789"/>
Nov 23 05:04:36 localhost nova_compute[281613]:    <host name="172.18.0.104" port="6789"/>
Nov 23 05:04:36 localhost nova_compute[281613]:    <host name="172.18.0.105" port="6789"/>
Nov 23 05:04:36 localhost nova_compute[281613]:  </source>
Nov 23 05:04:36 localhost nova_compute[281613]:  <target dev="vdb" bus="virtio"/>
Nov 23 05:04:36 localhost nova_compute[281613]:  <serial>e6dcaf3a-9826-4f4a-99ce-65fb52624d71</serial>
Nov 23 05:04:36 localhost nova_compute[281613]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 23 05:04:36 localhost nova_compute[281613]: </disk>
Nov 23 05:04:36 localhost nova_compute[281613]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.716 281617 INFO nova.virt.libvirt.driver [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Successfully detached device vdb from instance 0878698a-ffc9-486f-96bf-d5a905dca1b1 from the persistent domain config.
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.717 281617 DEBUG nova.virt.libvirt.driver [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0878698a-ffc9-486f-96bf-d5a905dca1b1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.718 281617 DEBUG nova.virt.libvirt.guest [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 23 05:04:36 localhost nova_compute[281613]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 23 05:04:36 localhost nova_compute[281613]:  <source protocol="rbd" name="volumes/volume-e6dcaf3a-9826-4f4a-99ce-65fb52624d71">
Nov 23 05:04:36 localhost nova_compute[281613]:    <host name="172.18.0.103" port="6789"/>
Nov 23 05:04:36 localhost nova_compute[281613]:    <host name="172.18.0.104" port="6789"/>
Nov 23 05:04:36 localhost nova_compute[281613]:    <host name="172.18.0.105" port="6789"/>
Nov 23 05:04:36 localhost nova_compute[281613]:  </source>
Nov 23 05:04:36 localhost nova_compute[281613]:  <target dev="vdb" bus="virtio"/>
Nov 23 05:04:36 localhost nova_compute[281613]:  <serial>e6dcaf3a-9826-4f4a-99ce-65fb52624d71</serial>
Nov 23 05:04:36 localhost nova_compute[281613]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 23 05:04:36 localhost nova_compute[281613]: </disk>
Nov 23 05:04:36 localhost nova_compute[281613]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.836 281617 DEBUG nova.virt.libvirt.driver [None req-2e131dc5-43f5-4896-a350-fe1d53177050 - - - - - -] Received event <DeviceRemovedEvent: 1763892276.8358746, 0878698a-ffc9-486f-96bf-d5a905dca1b1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.839 281617 DEBUG nova.virt.libvirt.driver [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0878698a-ffc9-486f-96bf-d5a905dca1b1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 23 05:04:36 localhost nova_compute[281613]: 2025-11-23 10:04:36.843 281617 INFO nova.virt.libvirt.driver [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Successfully detached device vdb from instance 0878698a-ffc9-486f-96bf-d5a905dca1b1 from the live domain config.
Nov 23 05:04:37 localhost nova_compute[281613]: 2025-11-23 10:04:37.006 281617 DEBUG nova.objects.instance [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lazy-loading 'flavor' on Instance uuid 0878698a-ffc9-486f-96bf-d5a905dca1b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 05:04:37 localhost nova_compute[281613]: 2025-11-23 10:04:37.039 281617 DEBUG oslo_concurrency.lockutils [None req-6e08098b-c091-46a0-a024-06a27d4a223b 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 05:04:38 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e180 e180: 6 total, 6 up, 6 in
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.390 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.392 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.393 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.393 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.393 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.395 281617 INFO nova.compute.manager [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Terminating instance
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.397 281617 DEBUG nova.compute.manager [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 23 05:04:38 localhost kernel: device tapcde9c0d4-e6 left promiscuous mode
Nov 23 05:04:38 localhost NetworkManager[5990]: <info>  [1763892278.4732] device (tapcde9c0d4-e6): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.486 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:38 localhost ovn_controller[153786]: 2025-11-23T10:04:38Z|00198|binding|INFO|Releasing lport cde9c0d4-e623-4543-a691-b11d78d0521b from this chassis (sb_readonly=0)
Nov 23 05:04:38 localhost ovn_controller[153786]: 2025-11-23T10:04:38Z|00199|binding|INFO|Setting lport cde9c0d4-e623-4543-a691-b11d78d0521b down in Southbound
Nov 23 05:04:38 localhost ovn_controller[153786]: 2025-11-23T10:04:38Z|00200|binding|INFO|Removing iface tapcde9c0d4-e6 ovn-installed in OVS
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.489 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.497 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:05:71 10.100.0.10'], port_security=['fa:16:3e:5f:05:71 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0878698a-ffc9-486f-96bf-d5a905dca1b1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49ebd7a691dd4ea59ffbe9f5703e77e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd77fc436-3ab1-42e0-a52b-861d18fcc237', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e956203-ccec-4e0d-b2cd-a19e87dc158b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=cde9c0d4-e623-4543-a691-b11d78d0521b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.499 159429 INFO neutron.agent.ovn.metadata.agent [-] Port cde9c0d4-e623-4543-a691-b11d78d0521b in datapath 27537d61-8ae5-47a8-b217-f913cbb83ef7 unbound from our chassis
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.503 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.502 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Port 537b5058-8e61-45ae-8f59-38dbe5e61006 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.504 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27537d61-8ae5-47a8-b217-f913cbb83ef7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.504 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[882e0cdd-a284-4b50-af44-47048a85433b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.506 159429 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7 namespace which is not needed anymore
Nov 23 05:04:38 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 23 05:04:38 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d0000000b.scope: Consumed 15.092s CPU time.
Nov 23 05:04:38 localhost systemd-machined[203166]: Machine qemu-2-instance-0000000b terminated.
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.619 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.624 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.634 281617 INFO nova.virt.libvirt.driver [-] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Instance destroyed successfully.
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.635 281617 DEBUG nova.objects.instance [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lazy-loading 'resources' on Instance uuid 0878698a-ffc9-486f-96bf-d5a905dca1b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.641 281617 DEBUG nova.compute.manager [req-9c958d61-948c-4bb7-bc66-4616e22ef760 req-c2db2d5d-0e48-4663-888e-b51a476bc84f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received event network-vif-unplugged-cde9c0d4-e623-4543-a691-b11d78d0521b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.645 281617 DEBUG oslo_concurrency.lockutils [req-9c958d61-948c-4bb7-bc66-4616e22ef760 req-c2db2d5d-0e48-4663-888e-b51a476bc84f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.646 281617 DEBUG oslo_concurrency.lockutils [req-9c958d61-948c-4bb7-bc66-4616e22ef760 req-c2db2d5d-0e48-4663-888e-b51a476bc84f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.647 281617 DEBUG oslo_concurrency.lockutils [req-9c958d61-948c-4bb7-bc66-4616e22ef760 req-c2db2d5d-0e48-4663-888e-b51a476bc84f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.647 281617 DEBUG nova.compute.manager [req-9c958d61-948c-4bb7-bc66-4616e22ef760 req-c2db2d5d-0e48-4663-888e-b51a476bc84f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] No waiting events found dispatching network-vif-unplugged-cde9c0d4-e623-4543-a691-b11d78d0521b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.647 281617 DEBUG nova.compute.manager [req-9c958d61-948c-4bb7-bc66-4616e22ef760 req-c2db2d5d-0e48-4663-888e-b51a476bc84f b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received event network-vif-unplugged-cde9c0d4-e623-4543-a691-b11d78d0521b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.659 281617 DEBUG nova.virt.libvirt.vif [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-23T10:03:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1770644392',display_name='tempest-VolumesBackupsTest-instance-1770644392',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005532586.localdomain',hostname='tempest-volumesbackupstest-instance-1770644392',id=11,image_ref='c5806483-57a8-4254-b41b-254b888c8606',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbMtM0rpJqnclcNLBgk4KSEiwJyDxhOOxIkPhgwzgS7K/A/uLo6bfQU35ro/p2iaEURDpm+5ppielVp65dcBV9RePqVaIQh7oGJXrXOd4KXkh5g6iCx7O9S9TqrSJ5aCA==',key_name='tempest-keypair-269458161',keypairs=<?>,launch_index=0,launched_at=2025-11-23T10:03:56Z,launched_on='np0005532586.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005532586.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='49ebd7a691dd4ea59ffbe9f5703e77e4',ramdisk_id='',reservation_id='r-ohu6ssq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c5806483-57a8-4254-b41b-254b888c8606',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1213662085',owner_user_name='tempest-VolumesBackupsTest-1213662085-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-23T10:03:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5f7e9736cbc74ce4ac3de51c4ac84504',uuid=0878698a-ffc9-486f-96bf-d5a905dca1b1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.660 281617 DEBUG nova.network.os_vif_util [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Converting VIF {"id": "cde9c0d4-e623-4543-a691-b11d78d0521b", "address": "fa:16:3e:5f:05:71", "network": {"id": "27537d61-8ae5-47a8-b217-f913cbb83ef7", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1492741888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "49ebd7a691dd4ea59ffbe9f5703e77e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcde9c0d4-e6", "ovs_interfaceid": "cde9c0d4-e623-4543-a691-b11d78d0521b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.661 281617 DEBUG nova.network.os_vif_util [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:05:71,bridge_name='br-int',has_traffic_filtering=True,id=cde9c0d4-e623-4543-a691-b11d78d0521b,network=Network(27537d61-8ae5-47a8-b217-f913cbb83ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcde9c0d4-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.661 281617 DEBUG os_vif [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:05:71,bridge_name='br-int',has_traffic_filtering=True,id=cde9c0d4-e623-4543-a691-b11d78d0521b,network=Network(27537d61-8ae5-47a8-b217-f913cbb83ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcde9c0d4-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.663 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.664 281617 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcde9c0d4-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.667 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.670 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.674 281617 INFO os_vif [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:05:71,bridge_name='br-int',has_traffic_filtering=True,id=cde9c0d4-e623-4543-a691-b11d78d0521b,network=Network(27537d61-8ae5-47a8-b217-f913cbb83ef7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcde9c0d4-e6')#033[00m
Nov 23 05:04:38 localhost systemd[1]: tmp-crun.nBMy9R.mount: Deactivated successfully.
Nov 23 05:04:38 localhost neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7[319354]: [NOTICE]   (319358) : haproxy version is 2.8.14-c23fe91
Nov 23 05:04:38 localhost neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7[319354]: [NOTICE]   (319358) : path to executable is /usr/sbin/haproxy
Nov 23 05:04:38 localhost neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7[319354]: [ALERT]    (319358) : Current worker (319360) exited with code 143 (Terminated)
Nov 23 05:04:38 localhost neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7[319354]: [WARNING]  (319358) : All workers exited. Exiting... (0)
Nov 23 05:04:38 localhost systemd[1]: libpod-6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c.scope: Deactivated successfully.
Nov 23 05:04:38 localhost podman[319829]: 2025-11-23 10:04:38.743304656 +0000 UTC m=+0.089671008 container died 6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 23 05:04:38 localhost systemd[1]: tmp-crun.xvWost.mount: Deactivated successfully.
Nov 23 05:04:38 localhost podman[319829]: 2025-11-23 10:04:38.79057981 +0000 UTC m=+0.136946162 container cleanup 6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 05:04:38 localhost podman[319860]: 2025-11-23 10:04:38.819810118 +0000 UTC m=+0.061621092 container cleanup 6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:04:38 localhost systemd[1]: libpod-conmon-6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c.scope: Deactivated successfully.
Nov 23 05:04:38 localhost podman[319874]: 2025-11-23 10:04:38.887621035 +0000 UTC m=+0.076270586 container remove 6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.895 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[967870c4-e0c5-4504-be5b-0e8664c753a3]: (4, ('Sun Nov 23 10:04:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7 (6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c)\n6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c\nSun Nov 23 10:04:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7 (6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c)\n6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.900 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f856322e-8ad9-43bb-b4fe-dd0959a3340e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.902 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27537d61-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.905 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:38 localhost kernel: device tap27537d61-80 left promiscuous mode
Nov 23 05:04:38 localhost nova_compute[281613]: 2025-11-23 10:04:38.915 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.920 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffca2a0-dddc-419f-a4d1-2151d349de07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.942 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[916f21aa-ba77-4b1c-aa20-2f65b1cf7ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.944 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e648ea9e-85cc-4aa7-8119-0e7c255290f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.965 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e1b1be-20fc-4870-a06b-e14112434289]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1217930, 'reachable_time': 24940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319893, 'error': None, 'target': 'ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.970 159535 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-27537d61-8ae5-47a8-b217-f913cbb83ef7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 23 05:04:38 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:38.970 159535 DEBUG oslo.privsep.daemon [-] privsep: reply[047fc8c0-bdbc-4cbd-b9a4-1e4c0d645083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:39 localhost nova_compute[281613]: 2025-11-23 10:04:39.391 281617 INFO nova.virt.libvirt.driver [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Deleting instance files /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1_del#033[00m
Nov 23 05:04:39 localhost nova_compute[281613]: 2025-11-23 10:04:39.392 281617 INFO nova.virt.libvirt.driver [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Deletion of /var/lib/nova/instances/0878698a-ffc9-486f-96bf-d5a905dca1b1_del complete#033[00m
Nov 23 05:04:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e181 e181: 6 total, 6 up, 6 in
Nov 23 05:04:39 localhost nova_compute[281613]: 2025-11-23 10:04:39.446 281617 DEBUG nova.virt.libvirt.host [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 23 05:04:39 localhost nova_compute[281613]: 2025-11-23 10:04:39.447 281617 INFO nova.virt.libvirt.host [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] UEFI support detected#033[00m
Nov 23 05:04:39 localhost nova_compute[281613]: 2025-11-23 10:04:39.452 281617 INFO nova.compute.manager [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m
Nov 23 05:04:39 localhost nova_compute[281613]: 2025-11-23 10:04:39.452 281617 DEBUG oslo.service.loopingcall [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 23 05:04:39 localhost nova_compute[281613]: 2025-11-23 10:04:39.453 281617 DEBUG nova.compute.manager [-] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 23 05:04:39 localhost nova_compute[281613]: 2025-11-23 10:04:39.454 281617 DEBUG nova.network.neutron [-] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 23 05:04:39 localhost systemd[1]: var-lib-containers-storage-overlay-0bac47e1294208475b1d0f19cd49cdebfa567aa05ed938c6e779060efdac377a-merged.mount: Deactivated successfully.
Nov 23 05:04:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a22bcc8bbb5c62458288f42c93f2401f2d94aad32b20f26e55655e986c3884c-userdata-shm.mount: Deactivated successfully.
Nov 23 05:04:39 localhost systemd[1]: run-netns-ovnmeta\x2d27537d61\x2d8ae5\x2d47a8\x2db217\x2df913cbb83ef7.mount: Deactivated successfully.
Nov 23 05:04:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:04:40 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:40.208 2 INFO neutron.agent.securitygroups_rpc [req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a req-8f09e20d-9c7b-422f-bb6f-02a4d95509a5 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Security group member updated ['d77fc436-3ab1-42e0-a52b-861d18fcc237']#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.327 281617 DEBUG nova.network.neutron [-] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.354 281617 INFO nova.compute.manager [-] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Took 0.90 seconds to deallocate network for instance.#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.433 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.434 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:04:40 localhost dnsmasq[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/addn_hosts - 1 addresses
Nov 23 05:04:40 localhost podman[319912]: 2025-11-23 10:04:40.479868744 +0000 UTC m=+0.066278446 container kill a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:04:40 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/host
Nov 23 05:04:40 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/opts
Nov 23 05:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.530 281617 DEBUG oslo_concurrency.processutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:04:40 localhost systemd[1]: tmp-crun.EucxBq.mount: Deactivated successfully.
Nov 23 05:04:40 localhost podman[319925]: 2025-11-23 10:04:40.615637433 +0000 UTC m=+0.096885072 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350)
Nov 23 05:04:40 localhost podman[319925]: 2025-11-23 10:04:40.656835933 +0000 UTC m=+0.138083572 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter)
Nov 23 05:04:40 localhost podman[319926]: 2025-11-23 10:04:40.673384539 +0000 UTC m=+0.154729771 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 23 05:04:40 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:04:40 localhost podman[319926]: 2025-11-23 10:04:40.690930413 +0000 UTC m=+0.172275695 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.694 281617 DEBUG nova.compute.manager [req-050c5b61-f8e2-41e9-a99f-c040df0059ac req-4e554167-e2e3-49de-a06b-240926b52b82 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received event network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.695 281617 DEBUG oslo_concurrency.lockutils [req-050c5b61-f8e2-41e9-a99f-c040df0059ac req-4e554167-e2e3-49de-a06b-240926b52b82 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Acquiring lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.696 281617 DEBUG oslo_concurrency.lockutils [req-050c5b61-f8e2-41e9-a99f-c040df0059ac req-4e554167-e2e3-49de-a06b-240926b52b82 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.696 281617 DEBUG oslo_concurrency.lockutils [req-050c5b61-f8e2-41e9-a99f-c040df0059ac req-4e554167-e2e3-49de-a06b-240926b52b82 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.696 281617 DEBUG nova.compute.manager [req-050c5b61-f8e2-41e9-a99f-c040df0059ac req-4e554167-e2e3-49de-a06b-240926b52b82 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] No waiting events found dispatching network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.697 281617 WARNING nova.compute.manager [req-050c5b61-f8e2-41e9-a99f-c040df0059ac req-4e554167-e2e3-49de-a06b-240926b52b82 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received unexpected event network-vif-plugged-cde9c0d4-e623-4543-a691-b11d78d0521b for instance with vm_state deleted and task_state None.#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.697 281617 DEBUG nova.compute.manager [req-050c5b61-f8e2-41e9-a99f-c040df0059ac req-4e554167-e2e3-49de-a06b-240926b52b82 b7661bc5cba943dea266498398ed28cc 758f3043280349e086a85b86f2668848 - - default default] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Received event network-vif-deleted-cde9c0d4-e623-4543-a691-b11d78d0521b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 23 05:04:40 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.730 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:40 localhost systemd[1]: tmp-crun.QJJqf6.mount: Deactivated successfully.
Nov 23 05:04:40 localhost podman[319927]: 2025-11-23 10:04:40.781597286 +0000 UTC m=+0.261089587 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:04:40 localhost podman[319927]: 2025-11-23 10:04:40.817021891 +0000 UTC m=+0.296514082 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:04:40 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:04:40 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:04:40 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2673424660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.975 281617 DEBUG oslo_concurrency.processutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:04:40 localhost nova_compute[281613]: 2025-11-23 10:04:40.983 281617 DEBUG nova.compute.provider_tree [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:04:41 localhost nova_compute[281613]: 2025-11-23 10:04:41.014 281617 DEBUG nova.scheduler.client.report [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:04:41 localhost nova_compute[281613]: 2025-11-23 10:04:41.046 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:41 localhost nova_compute[281613]: 2025-11-23 10:04:41.082 281617 INFO nova.scheduler.client.report [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Deleted allocations for instance 0878698a-ffc9-486f-96bf-d5a905dca1b1#033[00m
Nov 23 05:04:41 localhost nova_compute[281613]: 2025-11-23 10:04:41.176 281617 DEBUG oslo_concurrency.lockutils [None req-dd0696f0-72c9-4ee8-8c44-5e6e5f5d0a7a 5f7e9736cbc74ce4ac3de51c4ac84504 49ebd7a691dd4ea59ffbe9f5703e77e4 - - default default] Lock "0878698a-ffc9-486f-96bf-d5a905dca1b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:04:41 localhost podman[240144]: time="2025-11-23T10:04:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:04:41 localhost podman[240144]: @ - - [23/Nov/2025:10:04:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159974 "" "Go-http-client/1.1"
Nov 23 05:04:41 localhost podman[240144]: @ - - [23/Nov/2025:10:04:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20173 "" "Go-http-client/1.1"
Nov 23 05:04:41 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e182 e182: 6 total, 6 up, 6 in
Nov 23 05:04:42 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:04:42 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:04:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e183 e183: 6 total, 6 up, 6 in
Nov 23 05:04:43 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 23 05:04:43 localhost dnsmasq[319793]: exiting on receipt of SIGTERM
Nov 23 05:04:43 localhost podman[320116]: 2025-11-23 10:04:43.514509134 +0000 UTC m=+0.071477856 container kill 428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-69259e0a-4a90-49b3-a536-da994949edf9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:04:43 localhost systemd[1]: libpod-428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1.scope: Deactivated successfully.
Nov 23 05:04:43 localhost podman[320129]: 2025-11-23 10:04:43.590200474 +0000 UTC m=+0.058164827 container died 428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-69259e0a-4a90-49b3-a536-da994949edf9, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:04:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1-userdata-shm.mount: Deactivated successfully.
Nov 23 05:04:43 localhost podman[320129]: 2025-11-23 10:04:43.625231879 +0000 UTC m=+0.093196192 container cleanup 428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-69259e0a-4a90-49b3-a536-da994949edf9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 23 05:04:43 localhost systemd[1]: libpod-conmon-428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1.scope: Deactivated successfully.
Nov 23 05:04:43 localhost podman[320131]: 2025-11-23 10:04:43.667125128 +0000 UTC m=+0.127222189 container remove 428f9fd8a0212184f8fff9e95328ef047c73f1475cc017f5f1a2019461ae5df1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-69259e0a-4a90-49b3-a536-da994949edf9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:04:43 localhost nova_compute[281613]: 2025-11-23 10:04:43.669 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:43 localhost nova_compute[281613]: 2025-11-23 10:04:43.680 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:43 localhost ovn_controller[153786]: 2025-11-23T10:04:43Z|00201|binding|INFO|Releasing lport f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1 from this chassis (sb_readonly=0)
Nov 23 05:04:43 localhost ovn_controller[153786]: 2025-11-23T10:04:43Z|00202|binding|INFO|Setting lport f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1 down in Southbound
Nov 23 05:04:43 localhost kernel: device tapf5f1b1f5-6f left promiscuous mode
Nov 23 05:04:43 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:43.689 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-69259e0a-4a90-49b3-a536-da994949edf9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69259e0a-4a90-49b3-a536-da994949edf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8633d61c76748a7a900f3c8cea84ef3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86b53736-2d9a-47bd-9e36-8e373431dd5c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:04:43 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:43.691 159429 INFO neutron.agent.ovn.metadata.agent [-] Port f5f1b1f5-6fdd-4781-98cd-05ba80d5d2b1 in datapath 69259e0a-4a90-49b3-a536-da994949edf9 unbound from our chassis#033[00m
Nov 23 05:04:43 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:43.696 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69259e0a-4a90-49b3-a536-da994949edf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:04:43 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:43.697 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[cae14687-add6-4e1b-a364-1602180ff63a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:43 localhost nova_compute[281613]: 2025-11-23 10:04:43.702 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:43 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:43.724 262721 INFO neutron.agent.dhcp.agent [None req-4deb3adc-0d51-4764-9e5c-cb277953d9de - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:04:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:04:44 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:44.446 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:04:44 localhost systemd[1]: var-lib-containers-storage-overlay-87b1c51ba5e939dbb1a6dd07adabf5ca598480c622950e5bceae7085737551b3-merged.mount: Deactivated successfully.
Nov 23 05:04:44 localhost systemd[1]: run-netns-qdhcp\x2d69259e0a\x2d4a90\x2d49b3\x2da536\x2dda994949edf9.mount: Deactivated successfully.
Nov 23 05:04:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:04:44 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/548781773' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:04:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:04:44 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/548781773' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:04:44 localhost nova_compute[281613]: 2025-11-23 10:04:44.872 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:04:44 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:44.976 2 INFO neutron.agent.securitygroups_rpc [None req-f2af9951-cbc0-4ee3-8964-70390f4dbee5 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:45 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e184 e184: 6 total, 6 up, 6 in
Nov 23 05:04:45 localhost nova_compute[281613]: 2025-11-23 10:04:45.732 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:45.951 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:04:45 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:45.953 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:04:45 localhost nova_compute[281613]: 2025-11-23 10:04:45.988 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:46 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e185 e185: 6 total, 6 up, 6 in
Nov 23 05:04:47 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e186 e186: 6 total, 6 up, 6 in
Nov 23 05:04:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e187 e187: 6 total, 6 up, 6 in
Nov 23 05:04:48 localhost nova_compute[281613]: 2025-11-23 10:04:48.674 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:49 localhost systemd[1]: tmp-crun.hPytsB.mount: Deactivated successfully.
Nov 23 05:04:49 localhost podman[320173]: 2025-11-23 10:04:49.794585726 +0000 UTC m=+0.075060634 container kill a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 23 05:04:49 localhost dnsmasq[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/addn_hosts - 0 addresses
Nov 23 05:04:49 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/host
Nov 23 05:04:49 localhost dnsmasq-dhcp[316867]: read /var/lib/neutron/dhcp/27537d61-8ae5-47a8-b217-f913cbb83ef7/opts
Nov 23 05:04:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:04:49 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:49.955 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:04:50 localhost nova_compute[281613]: 2025-11-23 10:04:50.030 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:50 localhost ovn_controller[153786]: 2025-11-23T10:04:50Z|00203|binding|INFO|Releasing lport 7b8175d3-23c5-4287-b150-ab741e319c50 from this chassis (sb_readonly=0)
Nov 23 05:04:50 localhost ovn_controller[153786]: 2025-11-23T10:04:50Z|00204|binding|INFO|Setting lport 7b8175d3-23c5-4287-b150-ab741e319c50 down in Southbound
Nov 23 05:04:50 localhost kernel: device tap7b8175d3-23 left promiscuous mode
Nov 23 05:04:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:50.040 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-27537d61-8ae5-47a8-b217-f913cbb83ef7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '49ebd7a691dd4ea59ffbe9f5703e77e4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e956203-ccec-4e0d-b2cd-a19e87dc158b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=7b8175d3-23c5-4287-b150-ab741e319c50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:04:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:50.042 159429 INFO neutron.agent.ovn.metadata.agent [-] Port 7b8175d3-23c5-4287-b150-ab741e319c50 in datapath 27537d61-8ae5-47a8-b217-f913cbb83ef7 unbound from our chassis#033[00m
Nov 23 05:04:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:50.044 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 27537d61-8ae5-47a8-b217-f913cbb83ef7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:04:50 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:50.046 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[fb83b0aa-8695-43ed-8f8a-a66115a00318]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:50 localhost nova_compute[281613]: 2025-11-23 10:04:50.056 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:50 localhost nova_compute[281613]: 2025-11-23 10:04:50.734 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:04:51 localhost podman[320198]: 2025-11-23 10:04:51.176733974 +0000 UTC m=+0.075061404 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:04:51 localhost podman[320198]: 2025-11-23 10:04:51.191255355 +0000 UTC m=+0.089582715 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:04:51 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:04:51 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:51.239 2 INFO neutron.agent.securitygroups_rpc [None req-9f10832a-94d7-4e63-bc11-33234c92ec82 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:04:51 localhost systemd[1]: tmp-crun.cNmHSI.mount: Deactivated successfully.
Nov 23 05:04:51 localhost podman[320197]: 2025-11-23 10:04:51.249971348 +0000 UTC m=+0.148680238 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 05:04:51 localhost podman[320199]: 2025-11-23 10:04:51.262009022 +0000 UTC m=+0.153140359 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:04:51 localhost podman[320196]: 2025-11-23 10:04:51.323297774 +0000 UTC m=+0.226581888 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:04:51 localhost podman[320197]: 2025-11-23 10:04:51.333079427 +0000 UTC m=+0.231788277 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 05:04:51 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:04:51 localhost podman[320196]: 2025-11-23 10:04:51.357946857 +0000 UTC m=+0.261230911 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 05:04:51 localhost podman[320199]: 2025-11-23 10:04:51.367165346 +0000 UTC m=+0.258296663 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:04:51 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:04:51 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:04:52 localhost openstack_network_exporter[242118]: ERROR   10:04:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:04:52 localhost openstack_network_exporter[242118]: ERROR   10:04:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:04:52 localhost openstack_network_exporter[242118]: ERROR   10:04:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:04:52 localhost openstack_network_exporter[242118]: ERROR   10:04:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:04:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:04:52 localhost openstack_network_exporter[242118]: ERROR   10:04:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:04:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:04:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e188 e188: 6 total, 6 up, 6 in
Nov 23 05:04:53 localhost nova_compute[281613]: 2025-11-23 10:04:53.632 281617 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763892278.6299648, 0878698a-ffc9-486f-96bf-d5a905dca1b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 23 05:04:53 localhost nova_compute[281613]: 2025-11-23 10:04:53.632 281617 INFO nova.compute.manager [-] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] VM Stopped (Lifecycle Event)#033[00m
Nov 23 05:04:53 localhost nova_compute[281613]: 2025-11-23 10:04:53.677 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:53 localhost nova_compute[281613]: 2025-11-23 10:04:53.724 281617 DEBUG nova.compute.manager [None req-e1e68e06-e485-4419-8c9f-057a8272effb - - - - - -] [instance: 0878698a-ffc9-486f-96bf-d5a905dca1b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 23 05:04:53 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:04:53 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:04:53 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:04:53 localhost podman[320292]: 2025-11-23 10:04:53.951022238 +0000 UTC m=+0.076728419 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:04:53 localhost nova_compute[281613]: 2025-11-23 10:04:53.954 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:54 localhost snmpd[67254]: empty variable list in _query
Nov 23 05:04:54 localhost snmpd[67254]: empty variable list in _query
Nov 23 05:04:54 localhost dnsmasq[316867]: exiting on receipt of SIGTERM
Nov 23 05:04:54 localhost podman[320329]: 2025-11-23 10:04:54.557944034 +0000 UTC m=+0.064140480 container kill a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:04:54 localhost systemd[1]: libpod-a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b.scope: Deactivated successfully.
Nov 23 05:04:54 localhost podman[320343]: 2025-11-23 10:04:54.640028885 +0000 UTC m=+0.066893473 container died a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 23 05:04:54 localhost podman[320343]: 2025-11-23 10:04:54.677077664 +0000 UTC m=+0.103942222 container cleanup a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:04:54 localhost systemd[1]: libpod-conmon-a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b.scope: Deactivated successfully.
Nov 23 05:04:54 localhost podman[320345]: 2025-11-23 10:04:54.724402539 +0000 UTC m=+0.139925062 container remove a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-27537d61-8ae5-47a8-b217-f913cbb83ef7, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:04:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:04:54 localhost systemd[1]: var-lib-containers-storage-overlay-3e08911954a5790264d9775dfef45b3980716bd60dd773057dcf231639aca5d9-merged.mount: Deactivated successfully.
Nov 23 05:04:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0caf796492a6b5aa002ba196fcdbe9c7d5f9c4847916dde3423d4a646cd2a2b-userdata-shm.mount: Deactivated successfully.
Nov 23 05:04:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:55.007 262721 INFO neutron.agent.dhcp.agent [None req-122bfd3f-e68d-4e39-be52-b4821c112625 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:04:55 localhost systemd[1]: run-netns-qdhcp\x2d27537d61\x2d8ae5\x2d47a8\x2db217\x2df913cbb83ef7.mount: Deactivated successfully.
Nov 23 05:04:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:55.009 262721 INFO neutron.agent.dhcp.agent [None req-122bfd3f-e68d-4e39-be52-b4821c112625 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:04:55 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:55.098 262721 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:04:55 localhost nova_compute[281613]: 2025-11-23 10:04:55.739 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:56 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:56.555 2 INFO neutron.agent.securitygroups_rpc [None req-b2e8c328-efaf-49d2-9816-397d8d6e979c 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['213f9d65-3629-4053-acee-7e99a128b417']#033[00m
Nov 23 05:04:57 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:57.766 2 INFO neutron.agent.securitygroups_rpc [None req-d7ea7df2-10a0-4360-bfff-447d012be880 6f11688a49fb4deba83327b1cf6539b4 02d402d01a514bbd8ec5543d8bb9b97c - - default default] Security group rule updated ['76c5df30-fcbd-4316-84a0-0d549c3af78d']#033[00m
Nov 23 05:04:57 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:57.971 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:04:57Z, description=, device_id=e37909af-961f-4dfd-8d68-199ed54a6cf8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cecd30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a7b160>], id=f536c583-cf47-4ad6-98d0-4f1c235878fb, ip_allocation=immediate, mac_address=fa:16:3e:bc:88:85, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2661, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:04:57Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:04:58 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:04:58 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:04:58 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:04:58 localhost podman[320389]: 2025-11-23 10:04:58.199127969 +0000 UTC m=+0.066117272 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 23 05:04:58 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:04:58.511 262721 INFO neutron.agent.dhcp.agent [None req-6678e229-a0a3-43df-848b-987c11268886 - - - - - -] DHCP configuration for ports {'f536c583-cf47-4ad6-98d0-4f1c235878fb'} is completed#033[00m
Nov 23 05:04:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:58.578 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:d7:84 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5143a1d3-f63b-452c-a57f-85c07d0974c0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d5a5bf10-b94f-4270-9b8b-f5b33fff78ea) old=Port_Binding(mac=['fa:16:3e:bb:d7:84 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:04:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:58.580 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d5a5bf10-b94f-4270-9b8b-f5b33fff78ea in datapath 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe updated#033[00m
Nov 23 05:04:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:58.582 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:04:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:04:58.583 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[e62c38ed-2369-4110-8958-3a3d35957813]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:04:58 localhost nova_compute[281613]: 2025-11-23 10:04:58.680 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:58 localhost nova_compute[281613]: 2025-11-23 10:04:58.842 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:04:59 localhost neutron_sriov_agent[255613]: 2025-11-23 10:04:59.301 2 INFO neutron.agent.securitygroups_rpc [None req-2900c02d-4bae-4668-a3d0-a31f6942bf81 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['213f9d65-3629-4053-acee-7e99a128b417', '05c9de82-0c74-49cb-8524-43dd3dd47f37']#033[00m
Nov 23 05:04:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:05:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/48377154' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:05:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:05:00 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:00.273 2 INFO neutron.agent.securitygroups_rpc [None req-31700e8d-00a6-42b0-834e-1388eab5f28c 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['05c9de82-0c74-49cb-8524-43dd3dd47f37']#033[00m
Nov 23 05:05:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/48377154' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:05:00 localhost nova_compute[281613]: 2025-11-23 10:05:00.741 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:02 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:02.798 2 INFO neutron.agent.securitygroups_rpc [None req-2a25adbb-406f-4488-9290-d86f8fa25b90 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['83b9eb37-6d54-417f-b8aa-c3bd6525a15a']#033[00m
Nov 23 05:05:03 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:03.106 2 INFO neutron.agent.securitygroups_rpc [None req-48af5a2d-b1ea-400e-a467-c239dff497de 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['83b9eb37-6d54-417f-b8aa-c3bd6525a15a']#033[00m
Nov 23 05:05:03 localhost nova_compute[281613]: 2025-11-23 10:05:03.684 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:03 localhost nova_compute[281613]: 2025-11-23 10:05:03.859 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:04 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:04.268 2 INFO neutron.agent.securitygroups_rpc [None req-068f4094-be4e-499c-ac72-326e6af4f870 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:04 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:04.533 2 INFO neutron.agent.securitygroups_rpc [None req-5e059f36-e08b-42c4-9c06-8e7d2c8a7a35 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:04 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:04.733 2 INFO neutron.agent.securitygroups_rpc [None req-4232445f-3fdb-4ab4-af76-a3c636901057 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:04 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:04.969 2 INFO neutron.agent.securitygroups_rpc [None req-bdf55031-0050-45a1-bc2b-6230ff544fa3 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:05 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:05.142 2 INFO neutron.agent.securitygroups_rpc [None req-efdb9984-7352-4eb4-bfb4-32226524bf47 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:05 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:05.480 2 INFO neutron.agent.securitygroups_rpc [None req-2d5ed31f-c175-4e95-8ecb-2dfb6c38fae5 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:05 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:05.536 2 INFO neutron.agent.securitygroups_rpc [None req-f12b045a-e1e3-435c-8190-d496fdcf5f2d 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['d1cc26af-765b-45fa-b447-8d13d7399069']#033[00m
Nov 23 05:05:05 localhost nova_compute[281613]: 2025-11-23 10:05:05.744 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:06 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:06.321 2 INFO neutron.agent.securitygroups_rpc [None req-b7a801b4-8c85-482c-af67-8b6642b94666 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:06 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e189 e189: 6 total, 6 up, 6 in
Nov 23 05:05:06 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:06.858 2 INFO neutron.agent.securitygroups_rpc [None req-0eca1fa9-2850-4efd-aa96-731301b95192 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:07 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:07.198 2 INFO neutron.agent.securitygroups_rpc [None req-b76074b6-0a53-43ed-86f9-a06ad1dd7bfb 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:07 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:07.575 2 INFO neutron.agent.securitygroups_rpc [None req-5025f2b5-3c8e-4a04-9825-a17ba040ab51 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['edf15f56-c99c-43ec-b56f-1cd94f59b2c7']#033[00m
Nov 23 05:05:07 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:07.788 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:d7:84 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5143a1d3-f63b-452c-a57f-85c07d0974c0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d5a5bf10-b94f-4270-9b8b-f5b33fff78ea) old=Port_Binding(mac=['fa:16:3e:bb:d7:84 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76e6f4ab-630a-4c73-a560-1e6a5fffbdbe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6eb850a1541d4942b249428ef6092e5e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:05:07 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:07.790 159429 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d5a5bf10-b94f-4270-9b8b-f5b33fff78ea in datapath 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe updated#033[00m
Nov 23 05:05:07 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:07.792 159429 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76e6f4ab-630a-4c73-a560-1e6a5fffbdbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 23 05:05:07 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:07.793 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[f0aa8fc9-8f58-4c42-9d2e-097f895c767f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:05:08 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:08.546 2 INFO neutron.agent.securitygroups_rpc [None req-b418740f-9100-4b44-8438-3f54c0de85da 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['ec91c804-f6c3-4a65-9ba5-93d7c528c909']#033[00m
Nov 23 05:05:08 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:08.592 2 INFO neutron.agent.securitygroups_rpc [None req-d6be62a7-1074-4184-82e5-b6c7eb9c713f 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['d1cc26af-765b-45fa-b447-8d13d7399069', 'b62406ca-1ad0-471f-83b3-a7b86cb40552', '5041c083-f562-4221-8bac-acacd7a21e13']#033[00m
Nov 23 05:05:08 localhost nova_compute[281613]: 2025-11-23 10:05:08.688 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:05:08 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1108105406' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:05:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:05:08 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1108105406' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:05:09 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:09.078 2 INFO neutron.agent.securitygroups_rpc [None req-7eb231ba-0d20-4837-8fea-3273f5df7e61 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['b62406ca-1ad0-471f-83b3-a7b86cb40552', '5041c083-f562-4221-8bac-acacd7a21e13']#033[00m
Nov 23 05:05:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:05:09.233 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:05:08Z, description=, device_id=da40d6c5-a255-43a6-9fbf-e2238a7bac71, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790acf640>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790acf940>], id=92d573f1-30cb-44da-9a84-aa7ad7232de5, ip_allocation=immediate, mac_address=fa:16:3e:e7:7f:6b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2750, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:05:09Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:05:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:09.271 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:05:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:09.272 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:05:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:09.272 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:05:09 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:05:09 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:05:09 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:05:09 localhost systemd[1]: tmp-crun.o2oqxM.mount: Deactivated successfully.
Nov 23 05:05:09 localhost podman[320427]: 2025-11-23 10:05:09.492511695 +0000 UTC m=+0.066855433 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:05:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:05:09.733 262721 INFO neutron.agent.dhcp.agent [None req-84dc42dd-621b-4976-9683-7dc4ebee5672 - - - - - -] DHCP configuration for ports {'92d573f1-30cb-44da-9a84-aa7ad7232de5'} is completed#033[00m
Nov 23 05:05:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:10 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:10.046 2 INFO neutron.agent.securitygroups_rpc [None req-dc86cddf-8096-4f55-8994-fc155127f219 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['5975f3c0-fffa-4893-9c5f-a50728456ba3']#033[00m
Nov 23 05:05:10 localhost nova_compute[281613]: 2025-11-23 10:05:10.125 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:05:10.201 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:05:10 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:10.384 2 INFO neutron.agent.securitygroups_rpc [None req-6f895a1f-dcef-4309-9299-e6c0da113106 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['5975f3c0-fffa-4893-9c5f-a50728456ba3']#033[00m
Nov 23 05:05:10 localhost nova_compute[281613]: 2025-11-23 10:05:10.750 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:05:11 localhost podman[320449]: 2025-11-23 10:05:11.181578543 +0000 UTC m=+0.084373035 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 05:05:11 localhost podman[320451]: 2025-11-23 10:05:11.226048902 +0000 UTC m=+0.122220666 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:05:11 localhost podman[320449]: 2025-11-23 10:05:11.246856172 +0000 UTC m=+0.149650704 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Nov 23 05:05:11 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:05:11 localhost podman[320451]: 2025-11-23 10:05:11.25940082 +0000 UTC m=+0.155572594 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:05:11 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:05:11 localhost podman[240144]: time="2025-11-23T10:05:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:05:11 localhost podman[240144]: @ - - [23/Nov/2025:10:05:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:05:11 localhost podman[240144]: @ - - [23/Nov/2025:10:05:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19223 "" "Go-http-client/1.1"
Nov 23 05:05:11 localhost podman[320450]: 2025-11-23 10:05:11.391576512 +0000 UTC m=+0.290888080 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Nov 23 05:05:11 localhost podman[320450]: 2025-11-23 10:05:11.43231282 +0000 UTC m=+0.331624368 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute)
Nov 23 05:05:11 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:05:12 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:12.269 2 INFO neutron.agent.securitygroups_rpc [None req-9a1758e4-5d44-475c-9640-9981332a110e 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['2786fa44-4779-49f0-84bb-2a9d4bed5cef']#033[00m
Nov 23 05:05:12 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e190 e190: 6 total, 6 up, 6 in
Nov 23 05:05:12 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:12.596 2 INFO neutron.agent.securitygroups_rpc [None req-52408a28-3173-4dbe-afae-862affbfdc2f 9fd6b9c1c244436ba8d5c98a9fcfa9c5 6eb850a1541d4942b249428ef6092e5e - - default default] Security group member updated ['cadd5356-9a8d-419a-ac04-589c2522a695']#033[00m
Nov 23 05:05:12 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 23 05:05:12 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:12.790 2 INFO neutron.agent.securitygroups_rpc [None req-4dab38c7-a234-4065-b71c-d9440ee4c0cc 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['2786fa44-4779-49f0-84bb-2a9d4bed5cef']#033[00m
Nov 23 05:05:13 localhost nova_compute[281613]: 2025-11-23 10:05:13.692 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:14 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:14.050 2 INFO neutron.agent.securitygroups_rpc [None req-01487171-b358-4961-aa9c-b003fa4396a5 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m
Nov 23 05:05:14 localhost nova_compute[281613]: 2025-11-23 10:05:14.122 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:14 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:14.290 2 INFO neutron.agent.securitygroups_rpc [None req-9dfbfcf8-cd5f-4ab2-ad68-710c1a723a6d 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m
Nov 23 05:05:14 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:14.492 2 INFO neutron.agent.securitygroups_rpc [None req-85d9c75a-2523-4c6c-82af-38821b506d6b 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m
Nov 23 05:05:14 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:14.735 2 INFO neutron.agent.securitygroups_rpc [None req-82a95b4d-7b43-4234-8627-52e2e708ade0 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m
Nov 23 05:05:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:14 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:14.934 2 INFO neutron.agent.securitygroups_rpc [None req-b4af07d0-a11f-4847-a586-dfc78999259e 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m
Nov 23 05:05:15 localhost nova_compute[281613]: 2025-11-23 10:05:15.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:15 localhost nova_compute[281613]: 2025-11-23 10:05:15.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:15 localhost nova_compute[281613]: 2025-11-23 10:05:15.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:05:15 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:15.137 2 INFO neutron.agent.securitygroups_rpc [None req-5c022b4a-ac2f-4704-91ea-0edb1d6cec16 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['d1c18a84-65e4-4a8f-9dbe-dec1719991b1']#033[00m
Nov 23 05:05:15 localhost nova_compute[281613]: 2025-11-23 10:05:15.749 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:15 localhost neutron_sriov_agent[255613]: 2025-11-23 10:05:15.771 2 INFO neutron.agent.securitygroups_rpc [None req-f781264c-3a54-454a-a5fe-8867df4ebfe6 131430ad8ac646268fcbca6e470c2ccc e604eae6a01f4975816ace0cec6e33f8 - - default default] Security group rule updated ['acd8c1db-c86a-40f9-91ab-30bd6f26d43e']#033[00m
Nov 23 05:05:16 localhost nova_compute[281613]: 2025-11-23 10:05:16.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:18 localhost nova_compute[281613]: 2025-11-23 10:05:18.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:18 localhost nova_compute[281613]: 2025-11-23 10:05:18.696 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.048 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.049 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.049 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.049 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.050 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:05:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:05:19 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/436952743' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.524 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.722 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.724 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11573MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.725 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.725 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.807 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.808 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:05:19 localhost nova_compute[281613]: 2025-11-23 10:05:19.826 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:05:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:05:20 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1820956695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:05:20 localhost nova_compute[281613]: 2025-11-23 10:05:20.261 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:05:20 localhost nova_compute[281613]: 2025-11-23 10:05:20.268 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:05:20 localhost nova_compute[281613]: 2025-11-23 10:05:20.283 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:05:20 localhost nova_compute[281613]: 2025-11-23 10:05:20.306 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:05:20 localhost nova_compute[281613]: 2025-11-23 10:05:20.307 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:05:20 localhost nova_compute[281613]: 2025-11-23 10:05:20.755 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:21 localhost nova_compute[281613]: 2025-11-23 10:05:21.308 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:21 localhost nova_compute[281613]: 2025-11-23 10:05:21.326 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:21 localhost nova_compute[281613]: 2025-11-23 10:05:21.327 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:05:21 localhost nova_compute[281613]: 2025-11-23 10:05:21.327 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:05:21 localhost nova_compute[281613]: 2025-11-23 10:05:21.344 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:05:21 localhost nova_compute[281613]: 2025-11-23 10:05:21.345 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:22 localhost nova_compute[281613]: 2025-11-23 10:05:22.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:05:22 localhost systemd[1]: tmp-crun.G09hd0.mount: Deactivated successfully.
Nov 23 05:05:22 localhost systemd[1]: tmp-crun.pZTISC.mount: Deactivated successfully.
Nov 23 05:05:22 localhost podman[320555]: 2025-11-23 10:05:22.184214021 +0000 UTC m=+0.082508004 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:05:22 localhost podman[320557]: 2025-11-23 10:05:22.15781194 +0000 UTC m=+0.060170243 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:05:22 localhost podman[320557]: 2025-11-23 10:05:22.241880676 +0000 UTC m=+0.144238979 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:05:22 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:05:22 localhost podman[320555]: 2025-11-23 10:05:22.266282994 +0000 UTC m=+0.164577017 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Nov 23 05:05:22 localhost openstack_network_exporter[242118]: ERROR   10:05:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:05:22 localhost openstack_network_exporter[242118]: ERROR   10:05:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:05:22 localhost openstack_network_exporter[242118]: ERROR   10:05:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:05:22 localhost openstack_network_exporter[242118]: ERROR   10:05:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:05:22 localhost openstack_network_exporter[242118]: ERROR   10:05:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:05:22 localhost podman[320556]: 2025-11-23 10:05:22.245203885 +0000 UTC m=+0.143133138 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 23 05:05:22 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:05:22 localhost podman[320556]: 2025-11-23 10:05:22.328916711 +0000 UTC m=+0.226845924 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:05:22 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:05:22 localhost podman[320558]: 2025-11-23 10:05:22.297041653 +0000 UTC m=+0.191599705 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:05:22 localhost podman[320558]: 2025-11-23 10:05:22.381059237 +0000 UTC m=+0.275617279 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:05:22 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:05:23 localhost nova_compute[281613]: 2025-11-23 10:05:23.016 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:05:23 localhost nova_compute[281613]: 2025-11-23 10:05:23.699 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:24 localhost sshd[320640]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:05:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e191 e191: 6 total, 6 up, 6 in
Nov 23 05:05:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:25 localhost nova_compute[281613]: 2025-11-23 10:05:25.760 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:05:25.863 262721 INFO neutron.agent.linux.ip_lib [None req-4309b7de-07e7-470e-93ba-05144cb0147c - - - - - -] Device tapebee1105-4f cannot be used as it has no MAC address#033[00m
Nov 23 05:05:25 localhost nova_compute[281613]: 2025-11-23 10:05:25.889 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:25 localhost kernel: device tapebee1105-4f entered promiscuous mode
Nov 23 05:05:25 localhost NetworkManager[5990]: <info>  [1763892325.8995] manager: (tapebee1105-4f): new Generic device (/org/freedesktop/NetworkManager/Devices/42)
Nov 23 05:05:25 localhost nova_compute[281613]: 2025-11-23 10:05:25.900 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:25 localhost ovn_controller[153786]: 2025-11-23T10:05:25Z|00205|binding|INFO|Claiming lport ebee1105-4fef-4aa0-9090-5aa6f06d4a8a for this chassis.
Nov 23 05:05:25 localhost ovn_controller[153786]: 2025-11-23T10:05:25Z|00206|binding|INFO|ebee1105-4fef-4aa0-9090-5aa6f06d4a8a: Claiming unknown
Nov 23 05:05:25 localhost systemd-udevd[320652]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 05:05:25 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:25.914 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-6e566d5c-7432-481a-87d6-197c0b6cb373', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e566d5c-7432-481a-87d6-197c0b6cb373', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a088503b43e94251822e3c0e9006a74e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2365f595-05fe-40bf-92d3-842b120c712e, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=ebee1105-4fef-4aa0-9090-5aa6f06d4a8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:05:25 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:25.916 159429 INFO neutron.agent.ovn.metadata.agent [-] Port ebee1105-4fef-4aa0-9090-5aa6f06d4a8a in datapath 6e566d5c-7432-481a-87d6-197c0b6cb373 bound to our chassis#033[00m
Nov 23 05:05:25 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:25.918 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6e566d5c-7432-481a-87d6-197c0b6cb373 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:05:25 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:25.919 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4ceccf-9688-4f90-b11b-9d1ac2067680]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:05:25 localhost ovn_controller[153786]: 2025-11-23T10:05:25Z|00207|binding|INFO|Setting lport ebee1105-4fef-4aa0-9090-5aa6f06d4a8a ovn-installed in OVS
Nov 23 05:05:25 localhost ovn_controller[153786]: 2025-11-23T10:05:25Z|00208|binding|INFO|Setting lport ebee1105-4fef-4aa0-9090-5aa6f06d4a8a up in Southbound
Nov 23 05:05:25 localhost nova_compute[281613]: 2025-11-23 10:05:25.943 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:25 localhost nova_compute[281613]: 2025-11-23 10:05:25.980 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:26 localhost nova_compute[281613]: 2025-11-23 10:05:26.009 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:26 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e192 e192: 6 total, 6 up, 6 in
Nov 23 05:05:26 localhost podman[320708]: 
Nov 23 05:05:26 localhost podman[320708]: 2025-11-23 10:05:26.879222366 +0000 UTC m=+0.078046744 container create ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e566d5c-7432-481a-87d6-197c0b6cb373, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 23 05:05:26 localhost systemd[1]: Started libpod-conmon-ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d.scope.
Nov 23 05:05:26 localhost systemd[1]: Started libcrun container.
Nov 23 05:05:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996791f5c81d4765691bf0f0f840ece1462ea1b82a24dfe16ab5705303fc3537/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 23 05:05:26 localhost podman[320708]: 2025-11-23 10:05:26.838396286 +0000 UTC m=+0.037220724 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 23 05:05:26 localhost podman[320708]: 2025-11-23 10:05:26.938270277 +0000 UTC m=+0.137094695 container init ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e566d5c-7432-481a-87d6-197c0b6cb373, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:05:26 localhost podman[320708]: 2025-11-23 10:05:26.943634982 +0000 UTC m=+0.142459370 container start ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e566d5c-7432-481a-87d6-197c0b6cb373, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 05:05:26 localhost dnsmasq[320726]: started, version 2.85 cachesize 150
Nov 23 05:05:26 localhost dnsmasq[320726]: DNS service limited to local subnets
Nov 23 05:05:26 localhost dnsmasq[320726]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 23 05:05:26 localhost dnsmasq[320726]: warning: no upstream servers configured
Nov 23 05:05:26 localhost dnsmasq-dhcp[320726]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Nov 23 05:05:26 localhost dnsmasq[320726]: read /var/lib/neutron/dhcp/6e566d5c-7432-481a-87d6-197c0b6cb373/addn_hosts - 0 addresses
Nov 23 05:05:26 localhost dnsmasq-dhcp[320726]: read /var/lib/neutron/dhcp/6e566d5c-7432-481a-87d6-197c0b6cb373/host
Nov 23 05:05:26 localhost dnsmasq-dhcp[320726]: read /var/lib/neutron/dhcp/6e566d5c-7432-481a-87d6-197c0b6cb373/opts
Nov 23 05:05:27 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:05:27.055 262721 INFO neutron.agent.dhcp.agent [None req-d2604b66-1fb3-47e5-9421-b4b0c3fd0465 - - - - - -] DHCP configuration for ports {'dfe68d00-b85f-47ae-9bd8-fc236cebdbc6'} is completed#033[00m
Nov 23 05:05:28 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e193 e193: 6 total, 6 up, 6 in
Nov 23 05:05:28 localhost podman[320744]: 2025-11-23 10:05:28.147223137 +0000 UTC m=+0.074050137 container kill ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e566d5c-7432-481a-87d6-197c0b6cb373, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:05:28 localhost dnsmasq[320726]: exiting on receipt of SIGTERM
Nov 23 05:05:28 localhost systemd[1]: libpod-ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d.scope: Deactivated successfully.
Nov 23 05:05:28 localhost ovn_controller[153786]: 2025-11-23T10:05:28Z|00209|binding|INFO|Removing iface tapebee1105-4f ovn-installed in OVS
Nov 23 05:05:28 localhost ovn_controller[153786]: 2025-11-23T10:05:28Z|00210|binding|INFO|Removing lport ebee1105-4fef-4aa0-9090-5aa6f06d4a8a ovn-installed in OVS
Nov 23 05:05:28 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:28.187 159429 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 94392288-cd1a-46c5-a03a-c84039d0d9af with type ""#033[00m
Nov 23 05:05:28 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:28.189 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005532586.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp9233304d-bedd-59dd-b504-d2b43a60c820-6e566d5c-7432-481a-87d6-197c0b6cb373', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e566d5c-7432-481a-87d6-197c0b6cb373', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a088503b43e94251822e3c0e9006a74e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005532586.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2365f595-05fe-40bf-92d3-842b120c712e, chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e8e129a0>], logical_port=ebee1105-4fef-4aa0-9090-5aa6f06d4a8a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:05:28 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:28.190 159429 INFO neutron.agent.ovn.metadata.agent [-] Port ebee1105-4fef-4aa0-9090-5aa6f06d4a8a in datapath 6e566d5c-7432-481a-87d6-197c0b6cb373 unbound from our chassis#033[00m
Nov 23 05:05:28 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:28.192 159429 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6e566d5c-7432-481a-87d6-197c0b6cb373 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 23 05:05:28 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:28.193 262865 DEBUG oslo.privsep.daemon [-] privsep: reply[aecf62d4-21c5-45b6-98de-f28837b0fa70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 23 05:05:28 localhost nova_compute[281613]: 2025-11-23 10:05:28.213 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:28 localhost nova_compute[281613]: 2025-11-23 10:05:28.214 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:28 localhost podman[320759]: 2025-11-23 10:05:28.235504126 +0000 UTC m=+0.067881571 container died ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e566d5c-7432-481a-87d6-197c0b6cb373, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:05:28 localhost podman[320759]: 2025-11-23 10:05:28.279481881 +0000 UTC m=+0.111859276 container cleanup ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e566d5c-7432-481a-87d6-197c0b6cb373, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 23 05:05:28 localhost systemd[1]: libpod-conmon-ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d.scope: Deactivated successfully.
Nov 23 05:05:28 localhost podman[320760]: 2025-11-23 10:05:28.326822327 +0000 UTC m=+0.155329408 container remove ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e566d5c-7432-481a-87d6-197c0b6cb373, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:05:28 localhost nova_compute[281613]: 2025-11-23 10:05:28.340 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:28 localhost kernel: device tapebee1105-4f left promiscuous mode
Nov 23 05:05:28 localhost nova_compute[281613]: 2025-11-23 10:05:28.357 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:28 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:05:28.377 262721 INFO neutron.agent.dhcp.agent [None req-71c75928-ab1e-414e-b3bd-010c033bf00a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:05:28 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:05:28.377 262721 INFO neutron.agent.dhcp.agent [None req-71c75928-ab1e-414e-b3bd-010c033bf00a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 23 05:05:28 localhost nova_compute[281613]: 2025-11-23 10:05:28.541 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:28 localhost nova_compute[281613]: 2025-11-23 10:05:28.701 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:28 localhost systemd[1]: tmp-crun.yDngJF.mount: Deactivated successfully.
Nov 23 05:05:28 localhost systemd[1]: var-lib-containers-storage-overlay-996791f5c81d4765691bf0f0f840ece1462ea1b82a24dfe16ab5705303fc3537-merged.mount: Deactivated successfully.
Nov 23 05:05:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae635f2986a3dddd3407ceb858c3f4bab3f148f76e37cbb2ba2180adf25df74d-userdata-shm.mount: Deactivated successfully.
Nov 23 05:05:28 localhost systemd[1]: run-netns-qdhcp\x2d6e566d5c\x2d7432\x2d481a\x2d87d6\x2d197c0b6cb373.mount: Deactivated successfully.
Nov 23 05:05:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e194 e194: 6 total, 6 up, 6 in
Nov 23 05:05:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:30 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e195 e195: 6 total, 6 up, 6 in
Nov 23 05:05:30 localhost nova_compute[281613]: 2025-11-23 10:05:30.763 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:31 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:05:31 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:05:31 localhost podman[320806]: 2025-11-23 10:05:31.079085317 +0000 UTC m=+0.062218077 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:05:31 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:05:31 localhost nova_compute[281613]: 2025-11-23 10:05:31.253 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:32 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e196 e196: 6 total, 6 up, 6 in
Nov 23 05:05:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e197 e197: 6 total, 6 up, 6 in
Nov 23 05:05:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:05:33 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1150942741' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:05:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:05:33 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1150942741' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:05:33 localhost nova_compute[281613]: 2025-11-23 10:05:33.703 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e198 e198: 6 total, 6 up, 6 in
Nov 23 05:05:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:35 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e199 e199: 6 total, 6 up, 6 in
Nov 23 05:05:35 localhost systemd[1]: tmp-crun.tHT88Q.mount: Deactivated successfully.
Nov 23 05:05:35 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:05:35 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:05:35 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:05:35 localhost podman[320841]: 2025-11-23 10:05:35.281509519 +0000 UTC m=+0.079206556 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:05:35 localhost nova_compute[281613]: 2025-11-23 10:05:35.308 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:35 localhost nova_compute[281613]: 2025-11-23 10:05:35.767 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e200 e200: 6 total, 6 up, 6 in
Nov 23 05:05:38 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e201 e201: 6 total, 6 up, 6 in
Nov 23 05:05:38 localhost nova_compute[281613]: 2025-11-23 10:05:38.706 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e202 e202: 6 total, 6 up, 6 in
Nov 23 05:05:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:40 localhost nova_compute[281613]: 2025-11-23 10:05:40.769 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:41 localhost podman[240144]: time="2025-11-23T10:05:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:05:41 localhost podman[240144]: @ - - [23/Nov/2025:10:05:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:05:41 localhost podman[240144]: @ - - [23/Nov/2025:10:05:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19214 "" "Go-http-client/1.1"
Nov 23 05:05:41 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e203 e203: 6 total, 6 up, 6 in
Nov 23 05:05:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:05:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:05:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:05:42 localhost podman[320864]: 2025-11-23 10:05:42.188056102 +0000 UTC m=+0.083730788 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Nov 23 05:05:42 localhost podman[320864]: 2025-11-23 10:05:42.20024995 +0000 UTC m=+0.095924686 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 23 05:05:42 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:05:42 localhost podman[320866]: 2025-11-23 10:05:42.250655028 +0000 UTC m=+0.137243019 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:05:42 localhost podman[320866]: 2025-11-23 10:05:42.259109786 +0000 UTC m=+0.145697787 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:05:42 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:05:42 localhost systemd[1]: tmp-crun.qXzzQ0.mount: Deactivated successfully.
Nov 23 05:05:42 localhost podman[320865]: 2025-11-23 10:05:42.353572712 +0000 UTC m=+0.244249343 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 05:05:42 localhost podman[320865]: 2025-11-23 10:05:42.363714206 +0000 UTC m=+0.254390827 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:05:42 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:05:42 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e204 e204: 6 total, 6 up, 6 in
Nov 23 05:05:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e205 e205: 6 total, 6 up, 6 in
Nov 23 05:05:43 localhost systemd[1]: tmp-crun.eBBYpb.mount: Deactivated successfully.
Nov 23 05:05:43 localhost podman[321029]: 2025-11-23 10:05:43.286274968 +0000 UTC m=+0.104188029 container exec 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., release=553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_CLEAN=True)
Nov 23 05:05:43 localhost podman[321029]: 2025-11-23 10:05:43.382168132 +0000 UTC m=+0.200081173 container exec_died 0b1b648b4b1a631ab9e477926468073c793bcf111acd8abb7d2e3f46ae5511b9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-46550e70-79cb-5f55-bf6d-1204b97e083b-crash-np0005532586, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=553, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12)
Nov 23 05:05:43 localhost nova_compute[281613]: 2025-11-23 10:05:43.708 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e206 e206: 6 total, 6 up, 6 in
Nov 23 05:05:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:05:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:05:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:05:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:05:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:05:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:05:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:05:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:05:45 localhost nova_compute[281613]: 2025-11-23 10:05:45.771 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:46 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:46.105 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:05:46 localhost nova_compute[281613]: 2025-11-23 10:05:46.106 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:46 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:46.107 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:05:46 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e207 e207: 6 total, 6 up, 6 in
Nov 23 05:05:46 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532584.localdomain to 836.6M
Nov 23 05:05:46 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532585.localdomain to 836.6M
Nov 23 05:05:46 localhost ceph-mon[302802]: Adjusting osd_memory_target on np0005532586.localdomain to 836.6M
Nov 23 05:05:46 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532584.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 05:05:46 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532586.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Nov 23 05:05:46 localhost ceph-mon[302802]: Unable to set osd_memory_target on np0005532585.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 23 05:05:46 localhost sshd[321235]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:05:47 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e208 e208: 6 total, 6 up, 6 in
Nov 23 05:05:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e209 e209: 6 total, 6 up, 6 in
Nov 23 05:05:48 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Nov 23 05:05:48 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:05:48 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:05:48 localhost nova_compute[281613]: 2025-11-23 10:05:48.711 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:49 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:05:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:50 localhost nova_compute[281613]: 2025-11-23 10:05:50.774 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:51 localhost ovn_metadata_agent[159423]: 2025-11-23 10:05:51.109 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:05:51 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80],prefix=session evict} (starting...)
Nov 23 05:05:52 localhost openstack_network_exporter[242118]: ERROR   10:05:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:05:52 localhost openstack_network_exporter[242118]: ERROR   10:05:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:05:52 localhost openstack_network_exporter[242118]: ERROR   10:05:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:05:52 localhost openstack_network_exporter[242118]: ERROR   10:05:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:05:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:05:52 localhost openstack_network_exporter[242118]: ERROR   10:05:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:05:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:05:52 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Nov 23 05:05:52 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Nov 23 05:05:52 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Nov 23 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:05:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e210 e210: 6 total, 6 up, 6 in
Nov 23 05:05:53 localhost podman[321239]: 2025-11-23 10:05:53.198311316 +0000 UTC m=+0.089959195 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:05:53 localhost podman[321239]: 2025-11-23 10:05:53.202602062 +0000 UTC m=+0.094250001 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:05:53 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:05:53 localhost systemd[1]: tmp-crun.PmpCR2.mount: Deactivated successfully.
Nov 23 05:05:53 localhost podman[321238]: 2025-11-23 10:05:53.250796691 +0000 UTC m=+0.145195164 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:05:53 localhost podman[321238]: 2025-11-23 10:05:53.263937885 +0000 UTC m=+0.158336338 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 23 05:05:53 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:05:53 localhost podman[321240]: 2025-11-23 10:05:53.31235342 +0000 UTC m=+0.201130352 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:05:53 localhost podman[321237]: 2025-11-23 10:05:53.355390479 +0000 UTC m=+0.251755876 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 05:05:53 localhost podman[321237]: 2025-11-23 10:05:53.384280908 +0000 UTC m=+0.280646355 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:05:53 localhost podman[321240]: 2025-11-23 10:05:53.377921886 +0000 UTC m=+0.266698858 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 23 05:05:53 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:05:53 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:05:53 localhost nova_compute[281613]: 2025-11-23 10:05:53.713 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e211 e211: 6 total, 6 up, 6 in
Nov 23 05:05:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:05:55 localhost nova_compute[281613]: 2025-11-23 10:05:55.777 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Nov 23 05:05:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:05:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80", "osd", "allow rw pool=manila_data namespace=fsvolumens_23db4718-314d-42ce-b54b-c4702b2fa362", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:05:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:05:57 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/561193803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:05:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:05:57 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/561193803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:05:58 localhost nova_compute[281613]: 2025-11-23 10:05:58.715 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:05:58 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80],prefix=session evict} (starting...)
Nov 23 05:05:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e212 e212: 6 total, 6 up, 6 in
Nov 23 05:05:59 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Nov 23 05:05:59 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Nov 23 05:05:59 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Nov 23 05:05:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:00 localhost nova_compute[281613]: 2025-11-23 10:06:00.781 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:03 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e213 e213: 6 total, 6 up, 6 in
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.538010) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363538048, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2879, "num_deletes": 276, "total_data_size": 5425586, "memory_usage": 5595936, "flush_reason": "Manual Compaction"}
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363557067, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 3552406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24310, "largest_seqno": 27184, "table_properties": {"data_size": 3540546, "index_size": 7725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27806, "raw_average_key_size": 22, "raw_value_size": 3515998, "raw_average_value_size": 2879, "num_data_blocks": 323, "num_entries": 1221, "num_filter_entries": 1221, "num_deletions": 276, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892236, "oldest_key_time": 1763892236, "file_creation_time": 1763892363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 19106 microseconds, and 7926 cpu microseconds.
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.557115) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 3552406 bytes OK
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.557140) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.558831) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.558854) EVENT_LOG_v1 {"time_micros": 1763892363558847, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.558877) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 5412117, prev total WAL file size 5412117, number of live WAL files 2.
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.560352) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end)
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(3469KB)], [42(14MB)]
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363560406, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 19054920, "oldest_snapshot_seqno": -1}
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 13247 keys, 17876742 bytes, temperature: kUnknown
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363638592, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 17876742, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17801023, "index_size": 41506, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33157, "raw_key_size": 356726, "raw_average_key_size": 26, "raw_value_size": 17575199, "raw_average_value_size": 1326, "num_data_blocks": 1553, "num_entries": 13247, "num_filter_entries": 13247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.638899) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 17876742 bytes
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.640655) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.4 rd, 228.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 14.8 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(10.4) write-amplify(5.0) OK, records in: 13812, records dropped: 565 output_compression: NoCompression
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.640685) EVENT_LOG_v1 {"time_micros": 1763892363640672, "job": 24, "event": "compaction_finished", "compaction_time_micros": 78299, "compaction_time_cpu_micros": 46758, "output_level": 6, "num_output_files": 1, "total_output_size": 17876742, "num_input_records": 13812, "num_output_records": 13247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363641310, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892363643521, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.560225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.643701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.643709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.643712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.643715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:06:03 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:06:03.643719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:06:03 localhost nova_compute[281613]: 2025-11-23 10:06:03.717 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:03 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/23db4718-314d-42ce-b54b-c4702b2fa362/bb499d3a-704f-460c-a9a3-9c6909b5fa80],prefix=session evict} (starting...)
Nov 23 05:06:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Nov 23 05:06:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Nov 23 05:06:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Nov 23 05:06:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:05 localhost nova_compute[281613]: 2025-11-23 10:06:05.784 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:06 localhost ovn_controller[153786]: 2025-11-23T10:06:06Z|00211|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 23 05:06:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e214 e214: 6 total, 6 up, 6 in
Nov 23 05:06:08 localhost nova_compute[281613]: 2025-11-23 10:06:08.719 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:06:08 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/725219692' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:06:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:06:08 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/725219692' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:06:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:06:09.271 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:06:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:06:09.272 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:06:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:06:09.272 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:06:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:10 localhost nova_compute[281613]: 2025-11-23 10:06:10.787 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:11 localhost podman[240144]: time="2025-11-23T10:06:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:06:11 localhost podman[240144]: @ - - [23/Nov/2025:10:06:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:06:11 localhost podman[240144]: @ - - [23/Nov/2025:10:06:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19215 "" "Go-http-client/1.1"
Nov 23 05:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:06:13 localhost systemd[1]: tmp-crun.GRnc8N.mount: Deactivated successfully.
Nov 23 05:06:13 localhost podman[321323]: 2025-11-23 10:06:13.233027068 +0000 UTC m=+0.137147587 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9)
Nov 23 05:06:13 localhost podman[321323]: 2025-11-23 10:06:13.249899683 +0000 UTC m=+0.154020212 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public)
Nov 23 05:06:13 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:06:13 localhost podman[321325]: 2025-11-23 10:06:13.297316961 +0000 UTC m=+0.194403179 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:06:13 localhost podman[321324]: 2025-11-23 10:06:13.200501832 +0000 UTC m=+0.103657245 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:06:13 localhost podman[321325]: 2025-11-23 10:06:13.307949048 +0000 UTC m=+0.205035256 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:06:13 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:06:13 localhost podman[321324]: 2025-11-23 10:06:13.330813584 +0000 UTC m=+0.233968967 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Nov 23 05:06:13 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:06:13 localhost nova_compute[281613]: 2025-11-23 10:06:13.721 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:15 localhost nova_compute[281613]: 2025-11-23 10:06:15.790 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:16 localhost nova_compute[281613]: 2025-11-23 10:06:16.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:16 localhost nova_compute[281613]: 2025-11-23 10:06:16.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:16 localhost nova_compute[281613]: 2025-11-23 10:06:16.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:16 localhost nova_compute[281613]: 2025-11-23 10:06:16.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:06:18 localhost nova_compute[281613]: 2025-11-23 10:06:18.723 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:18 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:06:18 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/697156999' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:06:18 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:06:18 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/697156999' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:06:19 localhost nova_compute[281613]: 2025-11-23 10:06:19.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:19 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-710186636", "format": "json"} : dispatch
Nov 23 05:06:19 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-710186636", "caps": ["mds", "allow rw path=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64", "osd", "allow rw pool=manila_data namespace=fsvolumens_b0303be3-5e23-424d-935b-a23f10085dfe", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:06:19 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-710186636", "caps": ["mds", "allow rw path=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64", "osd", "allow rw pool=manila_data namespace=fsvolumens_b0303be3-5e23-424d-935b-a23f10085dfe", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:06:19 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-710186636", "format": "json"} : dispatch
Nov 23 05:06:19 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-710186636"} : dispatch
Nov 23 05:06:19 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-710186636"}]': finished
Nov 23 05:06:19 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-710186636,client_metadata.root=/volumes/_nogroup/b0303be3-5e23-424d-935b-a23f10085dfe/5ae76225-892a-444c-90ca-4662bf6a5b64],prefix=session evict} (starting...)
Nov 23 05:06:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e215 e215: 6 total, 6 up, 6 in
Nov 23 05:06:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e216 e216: 6 total, 6 up, 6 in
Nov 23 05:06:20 localhost nova_compute[281613]: 2025-11-23 10:06:20.796 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.037 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.037 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.038 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.054 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.054 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.054 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.055 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.055 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:06:21 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:06:21 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/276648477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.505 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.660 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.662 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11560MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.662 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.662 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.726 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.726 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:06:21 localhost nova_compute[281613]: 2025-11-23 10:06:21.766 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:06:22 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:06:22 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4196145382' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:06:22 localhost nova_compute[281613]: 2025-11-23 10:06:22.216 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:06:22 localhost nova_compute[281613]: 2025-11-23 10:06:22.223 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:06:22 localhost nova_compute[281613]: 2025-11-23 10:06:22.252 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:06:22 localhost nova_compute[281613]: 2025-11-23 10:06:22.254 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:06:22 localhost nova_compute[281613]: 2025-11-23 10:06:22.255 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:06:22 localhost openstack_network_exporter[242118]: ERROR   10:06:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:06:22 localhost openstack_network_exporter[242118]: ERROR   10:06:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:06:22 localhost openstack_network_exporter[242118]: ERROR   10:06:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:06:22 localhost openstack_network_exporter[242118]: ERROR   10:06:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:06:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:06:22 localhost openstack_network_exporter[242118]: ERROR   10:06:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:06:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:06:23 localhost nova_compute[281613]: 2025-11-23 10:06:23.236 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:23 localhost nova_compute[281613]: 2025-11-23 10:06:23.237 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:06:23 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e217 e217: 6 total, 6 up, 6 in
Nov 23 05:06:23 localhost nova_compute[281613]: 2025-11-23 10:06:23.725 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:06:24 localhost podman[321432]: 2025-11-23 10:06:24.188902108 +0000 UTC m=+0.090395577 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 23 05:06:24 localhost podman[321432]: 2025-11-23 10:06:24.223936681 +0000 UTC m=+0.125430180 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 23 05:06:24 localhost podman[321433]: 2025-11-23 10:06:24.238167165 +0000 UTC m=+0.134902976 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:06:24 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:06:24 localhost podman[321433]: 2025-11-23 10:06:24.273944869 +0000 UTC m=+0.170680660 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:06:24 localhost podman[321431]: 2025-11-23 10:06:24.284703589 +0000 UTC m=+0.188248573 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 23 05:06:24 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:06:24 localhost podman[321434]: 2025-11-23 10:06:24.35150172 +0000 UTC m=+0.244748446 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:06:24 localhost podman[321431]: 2025-11-23 10:06:24.36637331 +0000 UTC m=+0.269918274 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 23 05:06:24 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:06:24 localhost podman[321434]: 2025-11-23 10:06:24.441971547 +0000 UTC m=+0.335218293 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:06:24 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:06:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e218 e218: 6 total, 6 up, 6 in
Nov 23 05:06:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 05:06:24 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/351949762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 05:06:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:25 localhost systemd[1]: tmp-crun.B9d9jD.mount: Deactivated successfully.
Nov 23 05:06:25 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e219 e219: 6 total, 6 up, 6 in
Nov 23 05:06:25 localhost nova_compute[281613]: 2025-11-23 10:06:25.796 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:27 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e220 e220: 6 total, 6 up, 6 in
Nov 23 05:06:28 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e221 e221: 6 total, 6 up, 6 in
Nov 23 05:06:28 localhost nova_compute[281613]: 2025-11-23 10:06:28.729 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:06:29 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/280235995' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:06:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:06:29 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/280235995' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:06:29 localhost nova_compute[281613]: 2025-11-23 10:06:29.791 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:29 localhost ovn_metadata_agent[159423]: 2025-11-23 10:06:29.791 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:06:29 localhost ovn_metadata_agent[159423]: 2025-11-23 10:06:29.793 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:06:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:30 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:06:30.434 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:06:30Z, description=, device_id=48f6cfc5-8b79-494b-95ec-92da5372a95b, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b247c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b248e0>], id=aadeb422-8dcf-4a91-b0b5-b39568fe89f4, ip_allocation=immediate, mac_address=fa:16:3e:c6:e5:45, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3210, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:06:30Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:06:30 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e222 e222: 6 total, 6 up, 6 in
Nov 23 05:06:30 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:06:30 localhost podman[321533]: 2025-11-23 10:06:30.652002712 +0000 UTC m=+0.062839364 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:06:30 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:06:30 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:06:30 localhost nova_compute[281613]: 2025-11-23 10:06:30.799 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:30 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:06:30.975 262721 INFO neutron.agent.dhcp.agent [None req-73f2c79e-4187-4d2c-a3fe-cf2a5ce6f154 - - - - - -] DHCP configuration for ports {'aadeb422-8dcf-4a91-b0b5-b39568fe89f4'} is completed#033[00m
Nov 23 05:06:31 localhost nova_compute[281613]: 2025-11-23 10:06:31.326 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:31 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e223 e223: 6 total, 6 up, 6 in
Nov 23 05:06:32 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:06:32.369 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:06:32Z, description=, device_id=330e0a53-cd4a-4c6e-a9ea-859155295b59, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a8c550>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a8c730>], id=99e7296f-bc17-4017-bccc-5c8e3ac66d86, ip_allocation=immediate, mac_address=fa:16:3e:a8:6c:7e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3222, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:06:32Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:06:32 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:06:32 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:06:32 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:06:32 localhost podman[321570]: 2025-11-23 10:06:32.566108514 +0000 UTC m=+0.060717716 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 23 05:06:32 localhost ovn_metadata_agent[159423]: 2025-11-23 10:06:32.795 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:06:32 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:06:32.839 262721 INFO neutron.agent.dhcp.agent [None req-bd08aee2-7386-4594-b68c-ce80be75aa7d - - - - - -] DHCP configuration for ports {'99e7296f-bc17-4017-bccc-5c8e3ac66d86'} is completed#033[00m
Nov 23 05:06:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e224 e224: 6 total, 6 up, 6 in
Nov 23 05:06:33 localhost nova_compute[281613]: 2025-11-23 10:06:33.407 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 23 05:06:33 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3749090719' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 23 05:06:33 localhost nova_compute[281613]: 2025-11-23 10:06:33.731 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e225 e225: 6 total, 6 up, 6 in
Nov 23 05:06:34 localhost nova_compute[281613]: 2025-11-23 10:06:34.533 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:35 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e226 e226: 6 total, 6 up, 6 in
Nov 23 05:06:35 localhost nova_compute[281613]: 2025-11-23 10:06:35.809 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:37 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e227 e227: 6 total, 6 up, 6 in
Nov 23 05:06:38 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e228 e228: 6 total, 6 up, 6 in
Nov 23 05:06:38 localhost nova_compute[281613]: 2025-11-23 10:06:38.691 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:38 localhost nova_compute[281613]: 2025-11-23 10:06:38.732 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e229 e229: 6 total, 6 up, 6 in
Nov 23 05:06:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:40 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e230 e230: 6 total, 6 up, 6 in
Nov 23 05:06:40 localhost nova_compute[281613]: 2025-11-23 10:06:40.812 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:41 localhost podman[240144]: time="2025-11-23T10:06:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:06:41 localhost podman[240144]: @ - - [23/Nov/2025:10:06:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:06:41 localhost podman[240144]: @ - - [23/Nov/2025:10:06:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19211 "" "Go-http-client/1.1"
Nov 23 05:06:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e231 e231: 6 total, 6 up, 6 in
Nov 23 05:06:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:06:43 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/620203384' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:06:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:06:43 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/620203384' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:06:43 localhost nova_compute[281613]: 2025-11-23 10:06:43.733 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:06:44 localhost podman[321593]: 2025-11-23 10:06:44.190038837 +0000 UTC m=+0.088753213 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal)
Nov 23 05:06:44 localhost systemd[1]: tmp-crun.tWd2m9.mount: Deactivated successfully.
Nov 23 05:06:44 localhost podman[321594]: 2025-11-23 10:06:44.259430346 +0000 UTC m=+0.152132540 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Nov 23 05:06:44 localhost podman[321595]: 2025-11-23 10:06:44.305251812 +0000 UTC m=+0.192091098 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:06:44 localhost podman[321594]: 2025-11-23 10:06:44.323505243 +0000 UTC m=+0.216207397 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible)
Nov 23 05:06:44 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:06:44 localhost podman[321595]: 2025-11-23 10:06:44.340768409 +0000 UTC m=+0.227607745 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:06:44 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:06:44 localhost podman[321593]: 2025-11-23 10:06:44.379487802 +0000 UTC m=+0.278202208 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 05:06:44 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:06:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:45 localhost nova_compute[281613]: 2025-11-23 10:06:45.816 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:06:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:06:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:06:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:06:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:06:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:06:47 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:06:47 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:06:48 localhost nova_compute[281613]: 2025-11-23 10:06:48.130 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:48 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:06:48 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:06:48 localhost podman[321816]: 2025-11-23 10:06:48.148619927 +0000 UTC m=+0.066489353 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 05:06:48 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:06:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e232 e232: 6 total, 6 up, 6 in
Nov 23 05:06:48 localhost nova_compute[281613]: 2025-11-23 10:06:48.735 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:49 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:06:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:50 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e233 e233: 6 total, 6 up, 6 in
Nov 23 05:06:50 localhost nova_compute[281613]: 2025-11-23 10:06:50.821 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:51 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e234 e234: 6 total, 6 up, 6 in
Nov 23 05:06:52 localhost openstack_network_exporter[242118]: ERROR   10:06:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:06:52 localhost openstack_network_exporter[242118]: ERROR   10:06:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:06:52 localhost openstack_network_exporter[242118]: ERROR   10:06:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:06:52 localhost openstack_network_exporter[242118]: ERROR   10:06:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:06:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:06:52 localhost openstack_network_exporter[242118]: ERROR   10:06:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:06:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:06:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e235 e235: 6 total, 6 up, 6 in
Nov 23 05:06:53 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:06:53 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:06:53 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:06:53 localhost nova_compute[281613]: 2025-11-23 10:06:53.737 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e236 e236: 6 total, 6 up, 6 in
Nov 23 05:06:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:06:55 localhost podman[321838]: 2025-11-23 10:06:55.211121925 +0000 UTC m=+0.106135381 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:06:55 localhost podman[321838]: 2025-11-23 10:06:55.244612808 +0000 UTC m=+0.139626274 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:06:55 localhost podman[321837]: 2025-11-23 10:06:55.190578151 +0000 UTC m=+0.091367133 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 05:06:55 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:06:55 localhost podman[321839]: 2025-11-23 10:06:55.261812141 +0000 UTC m=+0.152989804 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 23 05:06:55 localhost podman[321839]: 2025-11-23 10:06:55.285780256 +0000 UTC m=+0.176957909 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:06:55 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:06:55 localhost podman[321836]: 2025-11-23 10:06:55.250180308 +0000 UTC m=+0.152953224 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 23 05:06:55 localhost podman[321837]: 2025-11-23 10:06:55.326849453 +0000 UTC m=+0.227638425 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.build-date=20251118, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 05:06:55 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:06:55 localhost podman[321836]: 2025-11-23 10:06:55.383225293 +0000 UTC m=+0.285998169 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 23 05:06:55 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:06:55 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e237 e237: 6 total, 6 up, 6 in
Nov 23 05:06:55 localhost nova_compute[281613]: 2025-11-23 10:06:55.826 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:56 localhost systemd[1]: tmp-crun.wH2n7D.mount: Deactivated successfully.
Nov 23 05:06:56 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:06:57 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:06:57 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 23 05:06:57 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 23 05:06:58 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e238 e238: 6 total, 6 up, 6 in
Nov 23 05:06:58 localhost nova_compute[281613]: 2025-11-23 10:06:58.739 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:06:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:00 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:07:00 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:00 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:00 localhost nova_compute[281613]: 2025-11-23 10:07:00.827 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:03 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e239 e239: 6 total, 6 up, 6 in
Nov 23 05:07:03 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:03 localhost nova_compute[281613]: 2025-11-23 10:07:03.740 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:07:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 23 05:07:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 23 05:07:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:05 localhost nova_compute[281613]: 2025-11-23 10:07:05.833 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:06 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e240 e240: 6 total, 6 up, 6 in
Nov 23 05:07:07 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e241 e241: 6 total, 6 up, 6 in
Nov 23 05:07:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:07:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.182882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428182961, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1727, "num_deletes": 271, "total_data_size": 2337469, "memory_usage": 2372392, "flush_reason": "Manual Compaction"}
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428192281, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1529660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27189, "largest_seqno": 28911, "table_properties": {"data_size": 1522091, "index_size": 4398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 18108, "raw_average_key_size": 21, "raw_value_size": 1506165, "raw_average_value_size": 1821, "num_data_blocks": 185, "num_entries": 827, "num_filter_entries": 827, "num_deletions": 271, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892363, "oldest_key_time": 1763892363, "file_creation_time": 1763892428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 9441 microseconds, and 4647 cpu microseconds.
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.192331) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1529660 bytes OK
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.192353) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.194099) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.194121) EVENT_LOG_v1 {"time_micros": 1763892428194115, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.194142) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2328985, prev total WAL file size 2329309, number of live WAL files 2.
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.195516) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323736' seq:72057594037927935, type:22 .. '6C6F676D0034353330' seq:0, type:0; will stop at (end)
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1493KB)], [45(17MB)]
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428195768, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 19406402, "oldest_snapshot_seqno": -1}
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 13511 keys, 18794432 bytes, temperature: kUnknown
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428302912, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 18794432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18715857, "index_size": 43722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33797, "raw_key_size": 364144, "raw_average_key_size": 26, "raw_value_size": 18484195, "raw_average_value_size": 1368, "num_data_blocks": 1631, "num_entries": 13511, "num_filter_entries": 13511, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.303341) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 18794432 bytes
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.305039) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.0 rd, 175.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 17.0 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(25.0) write-amplify(12.3) OK, records in: 14074, records dropped: 563 output_compression: NoCompression
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.305064) EVENT_LOG_v1 {"time_micros": 1763892428305051, "job": 26, "event": "compaction_finished", "compaction_time_micros": 107223, "compaction_time_cpu_micros": 53273, "output_level": 6, "num_output_files": 1, "total_output_size": 18794432, "num_input_records": 14074, "num_output_records": 13511, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428305457, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892428307823, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.195072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.307873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.307881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.307885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.307888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:07:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:07:08.307891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:07:08 localhost nova_compute[281613]: 2025-11-23 10:07:08.742 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:08 localhost sshd[321920]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:07:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:07:09.272 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:07:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:07:09.274 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:07:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:07:09.274 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:07:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:07:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:07:10 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:07:10 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1905485083' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:07:10 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:07:10 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1905485083' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:07:10 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:07:10 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 23 05:07:10 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 23 05:07:10 localhost nova_compute[281613]: 2025-11-23 10:07:10.835 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:11 localhost podman[240144]: time="2025-11-23T10:07:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:07:11 localhost podman[240144]: @ - - [23/Nov/2025:10:07:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:07:11 localhost podman[240144]: @ - - [23/Nov/2025:10:07:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19217 "" "Go-http-client/1.1"
Nov 23 05:07:13 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e242 e242: 6 total, 6 up, 6 in
Nov 23 05:07:13 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:07:13 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3008702058' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:07:13 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:07:13 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3008702058' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:07:13 localhost nova_compute[281613]: 2025-11-23 10:07:13.777 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:07:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:07:15 localhost systemd[1]: tmp-crun.Oj6Zcd.mount: Deactivated successfully.
Nov 23 05:07:15 localhost podman[321922]: 2025-11-23 10:07:15.199485131 +0000 UTC m=+0.098511336 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Nov 23 05:07:15 localhost podman[321923]: 2025-11-23 10:07:15.253003023 +0000 UTC m=+0.154255228 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:07:15 localhost podman[321923]: 2025-11-23 10:07:15.297997606 +0000 UTC m=+0.199249781 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:07:15 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:07:15 localhost podman[321922]: 2025-11-23 10:07:15.318393125 +0000 UTC m=+0.217419310 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, name=ubi9-minimal, architecture=x86_64, release=1755695350, io.openshift.expose-services=)
Nov 23 05:07:15 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:07:15 localhost podman[321924]: 2025-11-23 10:07:15.30517964 +0000 UTC m=+0.199786216 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:07:15 localhost podman[321924]: 2025-11-23 10:07:15.388181236 +0000 UTC m=+0.282787812 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:07:15 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:07:15 localhost nova_compute[281613]: 2025-11-23 10:07:15.837 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:16 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:07:16.821 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:07:16Z, description=, device_id=27f60ba5-7556-40d8-99c6-791c5d4b29f0, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ae65b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a86370>], id=f261605c-3a36-499b-acea-5c02f1f6b5b2, ip_allocation=immediate, mac_address=fa:16:3e:1c:ed:f5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3366, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:07:16Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:07:17 localhost nova_compute[281613]: 2025-11-23 10:07:17.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:17 localhost nova_compute[281613]: 2025-11-23 10:07:17.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:17 localhost nova_compute[281613]: 2025-11-23 10:07:17.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:17 localhost nova_compute[281613]: 2025-11-23 10:07:17.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:07:17 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:17 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:07:17 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:07:17 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:07:17 localhost systemd[1]: tmp-crun.2uqi7g.mount: Deactivated successfully.
Nov 23 05:07:17 localhost podman[321998]: 2025-11-23 10:07:17.067138313 +0000 UTC m=+0.065057775 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:07:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:07:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 23 05:07:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 23 05:07:17 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:07:17.320 262721 INFO neutron.agent.dhcp.agent [None req-a7c05ca1-9b01-4c38-9de9-6710dd0d2755 - - - - - -] DHCP configuration for ports {'f261605c-3a36-499b-acea-5c02f1f6b5b2'} is completed#033[00m
Nov 23 05:07:18 localhost nova_compute[281613]: 2025-11-23 10:07:18.467 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:18 localhost nova_compute[281613]: 2025-11-23 10:07:18.778 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:19 localhost nova_compute[281613]: 2025-11-23 10:07:19.105 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:07:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:20 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e243 e243: 6 total, 6 up, 6 in
Nov 23 05:07:20 localhost nova_compute[281613]: 2025-11-23 10:07:20.842 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:21 localhost nova_compute[281613]: 2025-11-23 10:07:21.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:21 localhost nova_compute[281613]: 2025-11-23 10:07:21.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:07:21 localhost nova_compute[281613]: 2025-11-23 10:07:21.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:07:21 localhost nova_compute[281613]: 2025-11-23 10:07:21.043 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:07:21 localhost nova_compute[281613]: 2025-11-23 10:07:21.256 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.044 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.045 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.045 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.046 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.046 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:07:22 localhost openstack_network_exporter[242118]: ERROR   10:07:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:07:22 localhost openstack_network_exporter[242118]: ERROR   10:07:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:07:22 localhost openstack_network_exporter[242118]: ERROR   10:07:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:07:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:07:22 localhost openstack_network_exporter[242118]: ERROR   10:07:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:07:22 localhost openstack_network_exporter[242118]: ERROR   10:07:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:07:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:07:22 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:07:22 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3208999179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.551 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:07:22 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e244 e244: 6 total, 6 up, 6 in
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.759 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.760 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11543MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.761 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.761 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.848 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.848 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:07:22 localhost nova_compute[281613]: 2025-11-23 10:07:22.882 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:07:23 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:07:23 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2705449898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:07:23 localhost nova_compute[281613]: 2025-11-23 10:07:23.347 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:07:23 localhost nova_compute[281613]: 2025-11-23 10:07:23.355 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:07:23 localhost nova_compute[281613]: 2025-11-23 10:07:23.380 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:07:23 localhost nova_compute[281613]: 2025-11-23 10:07:23.404 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:07:23 localhost nova_compute[281613]: 2025-11-23 10:07:23.405 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:07:23 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:23 localhost nova_compute[281613]: 2025-11-23 10:07:23.814 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:23 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:07:23 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/563644150' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:07:23 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:07:23 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/563644150' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:07:24 localhost nova_compute[281613]: 2025-11-23 10:07:24.402 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:24 localhost nova_compute[281613]: 2025-11-23 10:07:24.403 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:24 localhost nova_compute[281613]: 2025-11-23 10:07:24.403 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:07:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 23 05:07:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 23 05:07:24 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:07:24.682 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:07:24Z, description=, device_id=911d76ef-8c35-4e07-a4bf-332c60f42359, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b809d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790ab7820>], id=bb3f7388-a00e-457c-9c96-50c2afec2f48, ip_allocation=immediate, mac_address=fa:16:3e:ab:e3:df, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3395, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:07:24Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:07:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:24 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 4 addresses
Nov 23 05:07:24 localhost podman[322080]: 2025-11-23 10:07:24.939942706 +0000 UTC m=+0.067758407 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:07:24 localhost systemd[1]: tmp-crun.oICss3.mount: Deactivated successfully.
Nov 23 05:07:24 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:07:24 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:07:25 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:07:25.213 262721 INFO neutron.agent.dhcp.agent [None req-7a6f9850-07f0-4613-a81e-5a2da8a91dd9 - - - - - -] DHCP configuration for ports {'bb3f7388-a00e-457c-9c96-50c2afec2f48'} is completed#033[00m
Nov 23 05:07:25 localhost nova_compute[281613]: 2025-11-23 10:07:25.843 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:26 localhost nova_compute[281613]: 2025-11-23 10:07:26.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:07:26 localhost nova_compute[281613]: 2025-11-23 10:07:26.143 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:26 localhost podman[322102]: 2025-11-23 10:07:26.190402484 +0000 UTC m=+0.088143366 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 05:07:26 localhost podman[322102]: 2025-11-23 10:07:26.205841861 +0000 UTC m=+0.103582743 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 05:07:26 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:07:26 localhost podman[322109]: 2025-11-23 10:07:26.256692661 +0000 UTC m=+0.144961438 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller)
Nov 23 05:07:26 localhost systemd[1]: tmp-crun.gARW4I.mount: Deactivated successfully.
Nov 23 05:07:26 localhost podman[322103]: 2025-11-23 10:07:26.267937794 +0000 UTC m=+0.161428062 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:07:26 localhost podman[322103]: 2025-11-23 10:07:26.307041288 +0000 UTC m=+0.200531556 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:07:26 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:07:26 localhost podman[322109]: 2025-11-23 10:07:26.323261905 +0000 UTC m=+0.211530662 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:07:26 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:07:26 localhost podman[322101]: 2025-11-23 10:07:26.396110518 +0000 UTC m=+0.300224211 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 05:07:26 localhost podman[322101]: 2025-11-23 10:07:26.400988149 +0000 UTC m=+0.305101882 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 23 05:07:26 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:07:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:07:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:28 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e245 e245: 6 total, 6 up, 6 in
Nov 23 05:07:28 localhost nova_compute[281613]: 2025-11-23 10:07:28.816 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:29 localhost systemd[1]: tmp-crun.1QX0mm.mount: Deactivated successfully.
Nov 23 05:07:29 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:07:29 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:07:29 localhost podman[322198]: 2025-11-23 10:07:29.03337818 +0000 UTC m=+0.082825874 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:07:29 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:07:29 localhost nova_compute[281613]: 2025-11-23 10:07:29.082 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:29 localhost nova_compute[281613]: 2025-11-23 10:07:29.286 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:30 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:30 localhost nova_compute[281613]: 2025-11-23 10:07:30.847 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:07:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 23 05:07:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 23 05:07:32 localhost ovn_metadata_agent[159423]: 2025-11-23 10:07:32.025 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:07:32 localhost ovn_metadata_agent[159423]: 2025-11-23 10:07:32.026 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:07:32 localhost nova_compute[281613]: 2025-11-23 10:07:32.094 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e246 e246: 6 total, 6 up, 6 in
Nov 23 05:07:33 localhost nova_compute[281613]: 2025-11-23 10:07:33.818 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:07:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:07:34 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/174658289' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:07:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:07:34 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/174658289' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:07:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e247 e247: 6 total, 6 up, 6 in
Nov 23 05:07:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:35 localhost nova_compute[281613]: 2025-11-23 10:07:35.849 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:36 localhost ovn_metadata_agent[159423]: 2025-11-23 10:07:36.028 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:07:37 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:37 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:07:37 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 23 05:07:37 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 23 05:07:38 localhost nova_compute[281613]: 2025-11-23 10:07:38.820 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:39 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:07:39 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:07:39 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:07:39 localhost podman[322236]: 2025-11-23 10:07:39.622260019 +0000 UTC m=+0.068189569 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:07:39 localhost nova_compute[281613]: 2025-11-23 10:07:39.622 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:40 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:07:40 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:40 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:40 localhost nova_compute[281613]: 2025-11-23 10:07:40.853 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:41 localhost podman[240144]: time="2025-11-23T10:07:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:07:41 localhost podman[240144]: @ - - [23/Nov/2025:10:07:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:07:41 localhost podman[240144]: @ - - [23/Nov/2025:10:07:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19220 "" "Go-http-client/1.1"
Nov 23 05:07:42 localhost nova_compute[281613]: 2025-11-23 10:07:42.989 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:43 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:07:43 localhost podman[322273]: 2025-11-23 10:07:43.002834132 +0000 UTC m=+0.086645716 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:07:43 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:07:43 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:07:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e248 e248: 6 total, 6 up, 6 in
Nov 23 05:07:43 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:43 localhost nova_compute[281613]: 2025-11-23 10:07:43.822 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e249 e249: 6 total, 6 up, 6 in
Nov 23 05:07:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:07:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 23 05:07:44 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 23 05:07:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:45 localhost nova_compute[281613]: 2025-11-23 10:07:45.857 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:07:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:07:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:07:46 localhost podman[322291]: 2025-11-23 10:07:46.172361107 +0000 UTC m=+0.079951755 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, architecture=x86_64, container_name=openstack_network_exporter)
Nov 23 05:07:46 localhost podman[322291]: 2025-11-23 10:07:46.186937679 +0000 UTC m=+0.094528337 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41)
Nov 23 05:07:46 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:07:46 localhost podman[322292]: 2025-11-23 10:07:46.278152368 +0000 UTC m=+0.181398469 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:07:46 localhost podman[322292]: 2025-11-23 10:07:46.293882482 +0000 UTC m=+0.197128573 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:07:46 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:07:46 localhost systemd[1]: tmp-crun.RnXbyz.mount: Deactivated successfully.
Nov 23 05:07:46 localhost podman[322293]: 2025-11-23 10:07:46.383732164 +0000 UTC m=+0.282839183 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:07:46 localhost podman[322293]: 2025-11-23 10:07:46.395038448 +0000 UTC m=+0.294145477 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:07:46 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:07:46 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e250 e250: 6 total, 6 up, 6 in
Nov 23 05:07:47 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:07:47 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:47 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:48 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:07:48 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:07:48 localhost nova_compute[281613]: 2025-11-23 10:07:48.823 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:49 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:07:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:50 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:50 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:07:50 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 23 05:07:50 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 23 05:07:50 localhost nova_compute[281613]: 2025-11-23 10:07:50.860 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:52 localhost openstack_network_exporter[242118]: ERROR   10:07:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:07:52 localhost openstack_network_exporter[242118]: ERROR   10:07:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:07:52 localhost openstack_network_exporter[242118]: ERROR   10:07:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:07:52 localhost openstack_network_exporter[242118]: ERROR   10:07:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:07:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:07:52 localhost openstack_network_exporter[242118]: ERROR   10:07:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:07:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:07:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e251 e251: 6 total, 6 up, 6 in
Nov 23 05:07:53 localhost nova_compute[281613]: 2025-11-23 10:07:53.868 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:07:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:07:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:07:54 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 23 05:07:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:07:55 localhost nova_compute[281613]: 2025-11-23 10:07:55.866 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:07:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:07:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:07:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:07:57 localhost systemd[1]: tmp-crun.2tZQVX.mount: Deactivated successfully.
Nov 23 05:07:57 localhost podman[322436]: 2025-11-23 10:07:57.205726343 +0000 UTC m=+0.104847127 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 05:07:57 localhost podman[322439]: 2025-11-23 10:07:57.240704805 +0000 UTC m=+0.131817314 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 23 05:07:57 localhost podman[322437]: 2025-11-23 10:07:57.257906408 +0000 UTC m=+0.153110396 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 05:07:57 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:07:57 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 23 05:07:57 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 23 05:07:57 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:07:57 localhost podman[322436]: 2025-11-23 10:07:57.295846651 +0000 UTC m=+0.194967445 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 23 05:07:57 localhost podman[322438]: 2025-11-23 10:07:57.305723187 +0000 UTC m=+0.197075782 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:07:57 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:07:57 localhost podman[322439]: 2025-11-23 10:07:57.314405251 +0000 UTC m=+0.205517730 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 23 05:07:57 localhost podman[322437]: 2025-11-23 10:07:57.325966233 +0000 UTC m=+0.221170231 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:07:57 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:07:57 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:07:57 localhost podman[322438]: 2025-11-23 10:07:57.369505416 +0000 UTC m=+0.260857971 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:07:57 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:07:58 localhost nova_compute[281613]: 2025-11-23 10:07:58.869 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:07:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e252 e252: 6 total, 6 up, 6 in
Nov 23 05:07:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:00 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.367088) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480367162, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1380, "num_deletes": 254, "total_data_size": 1935128, "memory_usage": 1960552, "flush_reason": "Manual Compaction"}
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480374849, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1268743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28916, "largest_seqno": 30291, "table_properties": {"data_size": 1262817, "index_size": 3076, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15367, "raw_average_key_size": 21, "raw_value_size": 1249959, "raw_average_value_size": 1775, "num_data_blocks": 133, "num_entries": 704, "num_filter_entries": 704, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892428, "oldest_key_time": 1763892428, "file_creation_time": 1763892480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 7855 microseconds, and 3934 cpu microseconds.
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.374941) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1268743 bytes OK
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.374977) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.377327) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.377352) EVENT_LOG_v1 {"time_micros": 1763892480377345, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.377375) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1928106, prev total WAL file size 1928106, number of live WAL files 2.
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.378059) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end)
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1239KB)], [48(17MB)]
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480378094, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 20063175, "oldest_snapshot_seqno": -1}
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 13680 keys, 18455345 bytes, temperature: kUnknown
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480440693, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 18455345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18376010, "index_size": 44034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34245, "raw_key_size": 368983, "raw_average_key_size": 26, "raw_value_size": 18141834, "raw_average_value_size": 1326, "num_data_blocks": 1637, "num_entries": 13680, "num_filter_entries": 13680, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.440985) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 18455345 bytes
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.443451) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 320.0 rd, 294.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 17.9 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(30.4) write-amplify(14.5) OK, records in: 14215, records dropped: 535 output_compression: NoCompression
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.443474) EVENT_LOG_v1 {"time_micros": 1763892480443465, "job": 28, "event": "compaction_finished", "compaction_time_micros": 62699, "compaction_time_cpu_micros": 33614, "output_level": 6, "num_output_files": 1, "total_output_size": 18455345, "num_input_records": 14215, "num_output_records": 13680, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480443744, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892480445391, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.377984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.445432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.445440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.445444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.445448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:08:00 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:08:00.445451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:08:00 localhost nova_compute[281613]: 2025-11-23 10:08:00.866 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:01 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:01 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:01 localhost sshd[322517]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:08:03 localhost nova_compute[281613]: 2025-11-23 10:08:03.871 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:03 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:08:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 23 05:08:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 23 05:08:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:05 localhost nova_compute[281613]: 2025-11-23 10:08:05.869 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:08:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e253 e253: 6 total, 6 up, 6 in
Nov 23 05:08:08 localhost nova_compute[281613]: 2025-11-23 10:08:08.872 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e254 e254: 6 total, 6 up, 6 in
Nov 23 05:08:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:08:09.274 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:08:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:08:09.274 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:08:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:08:09.275 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:08:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:10 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:10 localhost nova_compute[281613]: 2025-11-23 10:08:10.873 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:11 localhost podman[240144]: time="2025-11-23T10:08:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:08:11 localhost podman[240144]: @ - - [23/Nov/2025:10:08:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:08:11 localhost podman[240144]: @ - - [23/Nov/2025:10:08:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19216 "" "Go-http-client/1.1"
Nov 23 05:08:11 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:08:11 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 23 05:08:11 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 23 05:08:12 localhost sshd[322519]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:08:13 localhost ovn_controller[153786]: 2025-11-23T10:08:13Z|00212|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 23 05:08:13 localhost nova_compute[281613]: 2025-11-23 10:08:13.875 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:08:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:15 localhost nova_compute[281613]: 2025-11-23 10:08:15.875 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:17 localhost nova_compute[281613]: 2025-11-23 10:08:17.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:17 localhost nova_compute[281613]: 2025-11-23 10:08:17.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:08:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:08:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:08:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:08:17 localhost systemd[1]: tmp-crun.brH1ku.mount: Deactivated successfully.
Nov 23 05:08:17 localhost podman[322521]: 2025-11-23 10:08:17.2554131 +0000 UTC m=+0.153334473 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 23 05:08:17 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:17 localhost podman[322521]: 2025-11-23 10:08:17.297970637 +0000 UTC m=+0.195891980 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, architecture=x86_64)
Nov 23 05:08:17 localhost podman[322523]: 2025-11-23 10:08:17.309436906 +0000 UTC m=+0.201320157 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:08:17 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:08:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:08:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 23 05:08:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 23 05:08:17 localhost podman[322523]: 2025-11-23 10:08:17.322042616 +0000 UTC m=+0.213925827 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:08:17 localhost podman[322522]: 2025-11-23 10:08:17.223013747 +0000 UTC m=+0.118495194 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:08:17 localhost podman[322522]: 2025-11-23 10:08:17.356987937 +0000 UTC m=+0.252469334 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 23 05:08:17 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:08:17 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:08:18 localhost nova_compute[281613]: 2025-11-23 10:08:18.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:18 localhost systemd[1]: tmp-crun.kQjuw6.mount: Deactivated successfully.
Nov 23 05:08:18 localhost nova_compute[281613]: 2025-11-23 10:08:18.921 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:18 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e255 e255: 6 total, 6 up, 6 in
Nov 23 05:08:19 localhost nova_compute[281613]: 2025-11-23 10:08:19.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:19 localhost nova_compute[281613]: 2025-11-23 10:08:19.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:20 localhost nova_compute[281613]: 2025-11-23 10:08:20.878 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:08:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:21 localhost nova_compute[281613]: 2025-11-23 10:08:21.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:21 localhost nova_compute[281613]: 2025-11-23 10:08:21.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:08:21 localhost nova_compute[281613]: 2025-11-23 10:08:21.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:08:21 localhost nova_compute[281613]: 2025-11-23 10:08:21.039 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:08:22 localhost openstack_network_exporter[242118]: ERROR   10:08:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:08:22 localhost openstack_network_exporter[242118]: ERROR   10:08:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:08:22 localhost openstack_network_exporter[242118]: ERROR   10:08:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:08:22 localhost openstack_network_exporter[242118]: ERROR   10:08:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:08:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:08:22 localhost openstack_network_exporter[242118]: ERROR   10:08:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:08:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:08:23 localhost nova_compute[281613]: 2025-11-23 10:08:23.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:23 localhost nova_compute[281613]: 2025-11-23 10:08:23.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:23 localhost nova_compute[281613]: 2025-11-23 10:08:23.926 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.038 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.039 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.039 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.039 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.040 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:08:24 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e256 e256: 6 total, 6 up, 6 in
Nov 23 05:08:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:08:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 23 05:08:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 23 05:08:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:08:24 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2263582808' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.531 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.781 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.783 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11531MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.783 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:08:24 localhost nova_compute[281613]: 2025-11-23 10:08:24.784 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:08:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.052 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.052 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.135 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.259 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.260 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.294 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.335 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.365 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:08:25 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:08:25 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3762470076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.821 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.828 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.842 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.861 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.861 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.862 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.863 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.871 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 05:08:25 localhost nova_compute[281613]: 2025-11-23 10:08:25.880 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:08:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:08:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:08:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:08:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:08:28 localhost podman[322627]: 2025-11-23 10:08:28.186874631 +0000 UTC m=+0.087262972 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 23 05:08:28 localhost systemd[1]: tmp-crun.YwiMVe.mount: Deactivated successfully.
Nov 23 05:08:28 localhost podman[322627]: 2025-11-23 10:08:28.292504238 +0000 UTC m=+0.192892649 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 23 05:08:28 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:08:28 localhost podman[322630]: 2025-11-23 10:08:28.309752572 +0000 UTC m=+0.203151056 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:08:28 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:28 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:28 localhost podman[322629]: 2025-11-23 10:08:28.343746428 +0000 UTC m=+0.238390045 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:08:28 localhost podman[322628]: 2025-11-23 10:08:28.274233585 +0000 UTC m=+0.171786670 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 05:08:28 localhost podman[322630]: 2025-11-23 10:08:28.381530846 +0000 UTC m=+0.274929400 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 23 05:08:28 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:08:28 localhost podman[322628]: 2025-11-23 10:08:28.404397393 +0000 UTC m=+0.301950428 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:08:28 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:08:28 localhost podman[322629]: 2025-11-23 10:08:28.431818791 +0000 UTC m=+0.326462438 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:08:28 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:08:28 localhost nova_compute[281613]: 2025-11-23 10:08:28.927 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:30 localhost nova_compute[281613]: 2025-11-23 10:08:30.884 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:30 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:08:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 23 05:08:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 23 05:08:32 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:08:32 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019", "osd", "allow rw pool=manila_data namespace=fsvolumens_f99c71c2-29eb-4c61-ab24-07800f6e4b24", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:32 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019", "osd", "allow rw pool=manila_data namespace=fsvolumens_f99c71c2-29eb-4c61-ab24-07800f6e4b24", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:33 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 e257: 6 total, 6 up, 6 in
Nov 23 05:08:33 localhost nova_compute[281613]: 2025-11-23 10:08:33.929 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:08:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:35 localhost nova_compute[281613]: 2025-11-23 10:08:35.891 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:36 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-1431575460,client_metadata.root=/volumes/_nogroup/f99c71c2-29eb-4c61-ab24-07800f6e4b24/458725ea-f0f9-482a-b03b-0e03f2cd7019],prefix=session evict} (starting...)
Nov 23 05:08:36 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:08:36 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch
Nov 23 05:08:36 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished
Nov 23 05:08:37 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:38 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:08:38 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 23 05:08:38 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 23 05:08:38 localhost nova_compute[281613]: 2025-11-23 10:08:38.931 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:40 localhost nova_compute[281613]: 2025-11-23 10:08:40.893 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:41 localhost podman[240144]: time="2025-11-23T10:08:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:08:41 localhost podman[240144]: @ - - [23/Nov/2025:10:08:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:08:41 localhost podman[240144]: @ - - [23/Nov/2025:10:08:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19225 "" "Go-http-client/1.1"
Nov 23 05:08:41 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:08:41 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:41 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:42 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:08:42 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751", "osd", "allow rw pool=manila_data namespace=fsvolumens_0066f586-ad80-42ee-9cb5-57bd65fc15e2", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:42 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751", "osd", "allow rw pool=manila_data namespace=fsvolumens_0066f586-ad80-42ee-9cb5-57bd65fc15e2", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:43 localhost nova_compute[281613]: 2025-11-23 10:08:43.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:43 localhost nova_compute[281613]: 2025-11-23 10:08:43.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 05:08:43 localhost nova_compute[281613]: 2025-11-23 10:08:43.933 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:44 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:45 localhost nova_compute[281613]: 2025-11-23 10:08:45.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:08:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:08:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 23 05:08:45 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 23 05:08:45 localhost nova_compute[281613]: 2025-11-23 10:08:45.894 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:46 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-1431575460,client_metadata.root=/volumes/_nogroup/0066f586-ad80-42ee-9cb5-57bd65fc15e2/7acd4543-b091-4fe9-abe4-01b783b9e751],prefix=session evict} (starting...)
Nov 23 05:08:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:08:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch
Nov 23 05:08:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished
Nov 23 05:08:46 localhost ovn_metadata_agent[159423]: 2025-11-23 10:08:46.889 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:08:46 localhost ovn_metadata_agent[159423]: 2025-11-23 10:08:46.890 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:08:46 localhost nova_compute[281613]: 2025-11-23 10:08:46.933 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:08:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:08:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:08:48 localhost systemd[1]: tmp-crun.3qTl7v.mount: Deactivated successfully.
Nov 23 05:08:48 localhost podman[322714]: 2025-11-23 10:08:48.196422476 +0000 UTC m=+0.098983919 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Nov 23 05:08:48 localhost podman[322714]: 2025-11-23 10:08:48.238949191 +0000 UTC m=+0.141510614 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 05:08:48 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:08:48 localhost podman[322716]: 2025-11-23 10:08:48.304999991 +0000 UTC m=+0.201433458 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:08:48 localhost podman[322715]: 2025-11-23 10:08:48.252452656 +0000 UTC m=+0.152836030 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 23 05:08:48 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:08:48 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:48 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:48 localhost podman[322715]: 2025-11-23 10:08:48.376304143 +0000 UTC m=+0.276687437 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:08:48 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:08:48 localhost podman[322716]: 2025-11-23 10:08:48.427498762 +0000 UTC m=+0.323932209 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:08:48 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:08:48 localhost nova_compute[281613]: 2025-11-23 10:08:48.935 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:50 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:08:50 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:08:50 localhost nova_compute[281613]: 2025-11-23 10:08:50.898 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:51 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:51 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:08:51 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 23 05:08:51 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 23 05:08:52 localhost openstack_network_exporter[242118]: ERROR   10:08:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:08:52 localhost openstack_network_exporter[242118]: ERROR   10:08:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:08:52 localhost openstack_network_exporter[242118]: ERROR   10:08:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:08:52 localhost openstack_network_exporter[242118]: ERROR   10:08:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:08:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:08:52 localhost openstack_network_exporter[242118]: ERROR   10:08:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:08:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:08:52 localhost ovn_metadata_agent[159423]: 2025-11-23 10:08:52.892 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:08:53 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:08:53 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5", "osd", "allow rw pool=manila_data namespace=fsvolumens_01ed1422-dec9-4d44-991d-9583c95296ac", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:53 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5", "osd", "allow rw pool=manila_data namespace=fsvolumens_01ed1422-dec9-4d44-991d-9583c95296ac", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:53 localhost nova_compute[281613]: 2025-11-23 10:08:53.939 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:08:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:08:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:08:55 localhost nova_compute[281613]: 2025-11-23 10:08:55.900 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:56 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-1431575460,client_metadata.root=/volumes/_nogroup/01ed1422-dec9-4d44-991d-9583c95296ac/7b7be716-7e6a-49c4-a977-3707f1cc08b5],prefix=session evict} (starting...)
Nov 23 05:08:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:08:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch
Nov 23 05:08:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished
Nov 23 05:08:57 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:08:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:08:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 23 05:08:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 23 05:08:58 localhost nova_compute[281613]: 2025-11-23 10:08:58.941 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:08:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:08:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:08:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:08:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:08:59 localhost systemd[1]: tmp-crun.Tm961Y.mount: Deactivated successfully.
Nov 23 05:08:59 localhost podman[322863]: 2025-11-23 10:08:59.205156289 +0000 UTC m=+0.103578243 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 05:08:59 localhost podman[322862]: 2025-11-23 10:08:59.244594912 +0000 UTC m=+0.143117638 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 23 05:08:59 localhost podman[322862]: 2025-11-23 10:08:59.252258218 +0000 UTC m=+0.150780944 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:08:59 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:08:59 localhost podman[322863]: 2025-11-23 10:08:59.297962181 +0000 UTC m=+0.196384165 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:08:59 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:08:59 localhost podman[322865]: 2025-11-23 10:08:59.345442169 +0000 UTC m=+0.237664555 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:08:59 localhost podman[322864]: 2025-11-23 10:08:59.299696847 +0000 UTC m=+0.196139867 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:08:59 localhost podman[322864]: 2025-11-23 10:08:59.386008013 +0000 UTC m=+0.282451023 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:08:59 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:08:59 localhost podman[322865]: 2025-11-23 10:08:59.412998411 +0000 UTC m=+0.305220827 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:08:59 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:08:59 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:08:59 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:08:59 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:08:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:09:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1447695815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:09:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:09:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1447695815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:09:00 localhost nova_compute[281613]: 2025-11-23 10:09:00.902 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:01 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:09:01 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:01 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:03 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-1431575460,client_metadata.root=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997],prefix=session evict} (starting...)
Nov 23 05:09:03 localhost nova_compute[281613]: 2025-11-23 10:09:03.943 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:03 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:09:03 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch
Nov 23 05:09:03 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished
Nov 23 05:09:04 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:09:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 23 05:09:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 23 05:09:04 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 23 05:09:05 localhost nova_compute[281613]: 2025-11-23 10:09:05.904 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:09:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:07 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:08 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:09:08 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:08 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:08 localhost nova_compute[281613]: 2025-11-23 10:09:08.945 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:09:09.274 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:09:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:09:09.275 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:09:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:09:09.275 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:09:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:09 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-1431575460,client_metadata.root=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997],prefix=session evict} (starting...)
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:09:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:09:10 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:09:10 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch
Nov 23 05:09:10 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished
Nov 23 05:09:10 localhost nova_compute[281613]: 2025-11-23 10:09:10.906 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:11 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:09:11 localhost podman[240144]: time="2025-11-23T10:09:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:09:11 localhost podman[240144]: @ - - [23/Nov/2025:10:09:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:09:11 localhost podman[240144]: @ - - [23/Nov/2025:10:09:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19228 "" "Go-http-client/1.1"
Nov 23 05:09:11 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:09:11 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 23 05:09:11 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 23 05:09:13 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:09:13 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:13 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:13 localhost nova_compute[281613]: 2025-11-23 10:09:13.947 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:09:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:14 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:15 localhost nova_compute[281613]: 2025-11-23 10:09:15.909 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:16 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-1431575460,client_metadata.root=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997],prefix=session evict} (starting...)
Nov 23 05:09:16 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:09:16 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch
Nov 23 05:09:16 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished
Nov 23 05:09:17 localhost nova_compute[281613]: 2025-11-23 10:09:17.027 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:17 localhost nova_compute[281613]: 2025-11-23 10:09:17.028 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:09:17 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:09:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 23 05:09:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 23 05:09:17 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 23 05:09:18 localhost nova_compute[281613]: 2025-11-23 10:09:18.949 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:19 localhost nova_compute[281613]: 2025-11-23 10:09:19.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:09:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:09:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:09:19 localhost podman[322948]: 2025-11-23 10:09:19.190229163 +0000 UTC m=+0.094858075 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Nov 23 05:09:19 localhost podman[322947]: 2025-11-23 10:09:19.2401895 +0000 UTC m=+0.145901252 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, config_id=edpm)
Nov 23 05:09:19 localhost podman[322947]: 2025-11-23 10:09:19.282015037 +0000 UTC m=+0.187726759 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Nov 23 05:09:19 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:09:19 localhost podman[322948]: 2025-11-23 10:09:19.30621246 +0000 UTC m=+0.210841282 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 23 05:09:19 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:09:19 localhost podman[322949]: 2025-11-23 10:09:19.28692026 +0000 UTC m=+0.190266028 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:09:19 localhost podman[322949]: 2025-11-23 10:09:19.365572129 +0000 UTC m=+0.268917917 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:09:19 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:09:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:09:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:20 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1431575460", "caps": ["mds", "allow rw path=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997", "osd", "allow rw pool=manila_data namespace=fsvolumens_561ab685-42f2-4920-b91d-2420296f93f0", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:20 localhost nova_compute[281613]: 2025-11-23 10:09:20.912 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:21 localhost nova_compute[281613]: 2025-11-23 10:09:21.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:21 localhost nova_compute[281613]: 2025-11-23 10:09:21.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:09:21 localhost nova_compute[281613]: 2025-11-23 10:09:21.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:09:21 localhost nova_compute[281613]: 2025-11-23 10:09:21.032 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:09:21 localhost nova_compute[281613]: 2025-11-23 10:09:21.032 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:21 localhost nova_compute[281613]: 2025-11-23 10:09:21.033 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:21 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:09:21 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:21 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:22 localhost openstack_network_exporter[242118]: ERROR   10:09:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:09:22 localhost openstack_network_exporter[242118]: ERROR   10:09:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:09:22 localhost openstack_network_exporter[242118]: ERROR   10:09:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:09:22 localhost openstack_network_exporter[242118]: ERROR   10:09:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:09:22 localhost openstack_network_exporter[242118]: ERROR   10:09:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:09:23 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-1431575460,client_metadata.root=/volumes/_nogroup/561ab685-42f2-4920-b91d-2420296f93f0/e7620f1a-9169-43a2-9a7b-ca15e95e1997],prefix=session evict} (starting...)
Nov 23 05:09:23 localhost nova_compute[281613]: 2025-11-23 10:09:23.950 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:24 localhost nova_compute[281613]: 2025-11-23 10:09:24.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1431575460", "format": "json"} : dispatch
Nov 23 05:09:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"} : dispatch
Nov 23 05:09:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1431575460"}]': finished
Nov 23 05:09:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:09:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 23 05:09:24 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 23 05:09:24 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:09:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.035 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.036 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.036 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.036 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.037 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:09:25 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:09:25 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2165963606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.479 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.684 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.686 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11507MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.687 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.688 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.776 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.777 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.801 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:09:25 localhost nova_compute[281613]: 2025-11-23 10:09:25.914 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:26 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:09:26 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2697729236' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:09:26 localhost nova_compute[281613]: 2025-11-23 10:09:26.253 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:09:26 localhost nova_compute[281613]: 2025-11-23 10:09:26.259 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:09:26 localhost nova_compute[281613]: 2025-11-23 10:09:26.279 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:09:26 localhost nova_compute[281613]: 2025-11-23 10:09:26.281 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:09:26 localhost nova_compute[281613]: 2025-11-23 10:09:26.282 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:09:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:09:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:27 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow r pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:28 localhost nova_compute[281613]: 2025-11-23 10:09:28.952 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:09:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:09:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:09:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:09:30 localhost podman[323056]: 2025-11-23 10:09:30.181128388 +0000 UTC m=+0.083775169 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:09:30 localhost podman[323056]: 2025-11-23 10:09:30.18899995 +0000 UTC m=+0.091646731 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:09:30 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:09:30 localhost podman[323055]: 2025-11-23 10:09:30.243394276 +0000 UTC m=+0.146411376 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 23 05:09:30 localhost nova_compute[281613]: 2025-11-23 10:09:30.279 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:09:30 localhost podman[323062]: 2025-11-23 10:09:30.289008755 +0000 UTC m=+0.184955555 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 23 05:09:30 localhost podman[323055]: 2025-11-23 10:09:30.36524343 +0000 UTC m=+0.268260510 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:09:30 localhost podman[323054]: 2025-11-23 10:09:30.381067436 +0000 UTC m=+0.287684154 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 23 05:09:30 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:09:30 localhost podman[323054]: 2025-11-23 10:09:30.393222753 +0000 UTC m=+0.299839471 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 05:09:30 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:09:30 localhost podman[323062]: 2025-11-23 10:09:30.433594372 +0000 UTC m=+0.329541172 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 23 05:09:30 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:09:30 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:09:30 localhost nova_compute[281613]: 2025-11-23 10:09:30.917 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 23 05:09:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 23 05:09:31 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 23 05:09:33 localhost nova_compute[281613]: 2025-11-23 10:09:33.953 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 23 05:09:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:09:34 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:09:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:35 localhost nova_compute[281613]: 2025-11-23 10:09:35.920 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:38 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e258 e258: 6 total, 6 up, 6 in
Nov 23 05:09:38 localhost nova_compute[281613]: 2025-11-23 10:09:38.955 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:40 localhost nova_compute[281613]: 2025-11-23 10:09:40.922 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:41 localhost podman[240144]: time="2025-11-23T10:09:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:09:41 localhost podman[240144]: @ - - [23/Nov/2025:10:09:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:09:41 localhost podman[240144]: @ - - [23/Nov/2025:10:09:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19218 "" "Go-http-client/1.1"
Nov 23 05:09:42 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 23 05:09:42 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6,allow rw path=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4,allow rw pool=manila_data namespace=fsvolumens_6734401e-5573-4709-85e4-c69140f6c86e"]} : dispatch
Nov 23 05:09:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e259 e259: 6 total, 6 up, 6 in
Nov 23 05:09:43 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6,allow rw path=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4,allow rw pool=manila_data namespace=fsvolumens_6734401e-5573-4709-85e4-c69140f6c86e"]}]': finished
Nov 23 05:09:43 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 23 05:09:43 localhost nova_compute[281613]: 2025-11-23 10:09:43.958 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:45 localhost nova_compute[281613]: 2025-11-23 10:09:45.926 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:45 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/6734401e-5573-4709-85e4-c69140f6c86e/7aacd64e-4c3a-45c6-ac2d-a36c3c0cca66],prefix=session evict} (starting...)
Nov 23 05:09:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 23 05:09:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4"]} : dispatch
Nov 23 05:09:46 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6", "osd", "allow rw pool=manila_data namespace=fsvolumens_c01bfd1f-c2a4-4505-b9fb-5784100d38f4"]}]': finished
Nov 23 05:09:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e260 e260: 6 total, 6 up, 6 in
Nov 23 05:09:48 localhost nova_compute[281613]: 2025-11-23 10:09:48.960 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:49 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/c01bfd1f-c2a4-4505-b9fb-5784100d38f4/80c7ebbb-31aa-44c5-8825-14a5c437eff6],prefix=session evict} (starting...)
Nov 23 05:09:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:09:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:09:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:09:49 localhost podman[323155]: 2025-11-23 10:09:49.870216056 +0000 UTC m=+0.092330139 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:09:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:49 localhost systemd[1]: tmp-crun.JLWAiy.mount: Deactivated successfully.
Nov 23 05:09:49 localhost podman[323154]: 2025-11-23 10:09:49.93790045 +0000 UTC m=+0.163430685 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible)
Nov 23 05:09:49 localhost podman[323154]: 2025-11-23 10:09:49.948623219 +0000 UTC m=+0.174153514 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 23 05:09:49 localhost podman[323155]: 2025-11-23 10:09:49.953290966 +0000 UTC m=+0.175405109 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:09:49 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:09:50 localhost podman[323152]: 2025-11-23 10:09:50.049259852 +0000 UTC m=+0.276623456 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 23 05:09:50 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:09:50 localhost podman[323152]: 2025-11-23 10:09:50.088862069 +0000 UTC m=+0.316225623 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 23 05:09:50 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:09:50 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 23 05:09:50 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Nov 23 05:09:50 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 23 05:09:50 localhost nova_compute[281613]: 2025-11-23 10:09:50.927 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:51 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:09:51 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:09:52 localhost openstack_network_exporter[242118]: ERROR   10:09:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:09:52 localhost openstack_network_exporter[242118]: ERROR   10:09:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:09:52 localhost openstack_network_exporter[242118]: ERROR   10:09:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:09:52 localhost openstack_network_exporter[242118]: ERROR   10:09:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:09:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:09:52 localhost openstack_network_exporter[242118]: ERROR   10:09:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:09:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:09:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e261 e261: 6 total, 6 up, 6 in
Nov 23 05:09:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:09:53.508 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:09:53 localhost ovn_metadata_agent[159423]: 2025-11-23 10:09:53.510 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:09:53 localhost nova_compute[281613]: 2025-11-23 10:09:53.559 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:53 localhost nova_compute[281613]: 2025-11-23 10:09:53.963 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:09:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:09:55 localhost nova_compute[281613]: 2025-11-23 10:09:55.929 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:58 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e262 e262: 6 total, 6 up, 6 in
Nov 23 05:09:58 localhost ovn_metadata_agent[159423]: 2025-11-23 10:09:58.511 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:09:58 localhost nova_compute[281613]: 2025-11-23 10:09:58.964 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:09:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:00 localhost ceph-mon[302802]: overall HEALTH_OK
Nov 23 05:10:00 localhost nova_compute[281613]: 2025-11-23 10:10:00.932 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:10:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:10:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:10:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:10:01 localhost podman[323282]: 2025-11-23 10:10:01.185161503 +0000 UTC m=+0.088043583 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 23 05:10:01 localhost podman[323283]: 2025-11-23 10:10:01.250729251 +0000 UTC m=+0.151637699 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:10:01 localhost podman[323282]: 2025-11-23 10:10:01.270307298 +0000 UTC m=+0.173189378 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:10:01 localhost podman[323283]: 2025-11-23 10:10:01.29002039 +0000 UTC m=+0.190928868 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 23 05:10:01 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:10:01 localhost podman[323285]: 2025-11-23 10:10:01.315109316 +0000 UTC m=+0.209318752 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:10:01 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:10:01 localhost podman[323285]: 2025-11-23 10:10:01.354521548 +0000 UTC m=+0.248730944 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 05:10:01 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:10:01 localhost podman[323284]: 2025-11-23 10:10:01.409303734 +0000 UTC m=+0.306534472 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:10:01 localhost podman[323284]: 2025-11-23 10:10:01.447964866 +0000 UTC m=+0.345195624 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:10:01 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:10:02 localhost systemd[1]: tmp-crun.VPOZVz.mount: Deactivated successfully.
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.431673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602431715, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2770, "num_deletes": 256, "total_data_size": 3083694, "memory_usage": 3144976, "flush_reason": "Manual Compaction"}
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602442450, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 1997751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30296, "largest_seqno": 33061, "table_properties": {"data_size": 1987130, "index_size": 6490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27052, "raw_average_key_size": 22, "raw_value_size": 1963988, "raw_average_value_size": 1608, "num_data_blocks": 281, "num_entries": 1221, "num_filter_entries": 1221, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892480, "oldest_key_time": 1763892480, "file_creation_time": 1763892602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 10829 microseconds, and 5997 cpu microseconds.
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.442501) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 1997751 bytes OK
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.442523) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.445085) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.445104) EVENT_LOG_v1 {"time_micros": 1763892602445098, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.445123) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3070480, prev total WAL file size 3070480, number of live WAL files 2.
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.446019) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end)
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(1950KB)], [51(17MB)]
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602446079, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 20453096, "oldest_snapshot_seqno": -1}
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 14368 keys, 18624726 bytes, temperature: kUnknown
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602531923, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 18624726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18540110, "index_size": 47587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35973, "raw_key_size": 385535, "raw_average_key_size": 26, "raw_value_size": 18293252, "raw_average_value_size": 1273, "num_data_blocks": 1778, "num_entries": 14368, "num_filter_entries": 14368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.532266) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 18624726 bytes
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.533804) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.9 rd, 216.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 17.6 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(19.6) write-amplify(9.3) OK, records in: 14901, records dropped: 533 output_compression: NoCompression
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.533831) EVENT_LOG_v1 {"time_micros": 1763892602533818, "job": 30, "event": "compaction_finished", "compaction_time_micros": 85969, "compaction_time_cpu_micros": 53454, "output_level": 6, "num_output_files": 1, "total_output_size": 18624726, "num_input_records": 14901, "num_output_records": 14368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602534229, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892602536868, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.445914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.536970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.536978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.536981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.536984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:10:02 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:10:02.536987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:10:03 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e263 e263: 6 total, 6 up, 6 in
Nov 23 05:10:03 localhost nova_compute[281613]: 2025-11-23 10:10:03.965 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:05 localhost nova_compute[281613]: 2025-11-23 10:10:05.935 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:08 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e264 e264: 6 total, 6 up, 6 in
Nov 23 05:10:08 localhost nova_compute[281613]: 2025-11-23 10:10:08.966 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:10:09.277 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:10:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:10:09.278 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:10:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:10:09.278 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:10:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:10 localhost nova_compute[281613]: 2025-11-23 10:10:10.939 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:11 localhost podman[240144]: time="2025-11-23T10:10:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:10:11 localhost podman[240144]: @ - - [23/Nov/2025:10:10:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:10:11 localhost podman[240144]: @ - - [23/Nov/2025:10:10:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19229 "" "Go-http-client/1.1"
Nov 23 05:10:13 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e265 e265: 6 total, 6 up, 6 in
Nov 23 05:10:13 localhost nova_compute[281613]: 2025-11-23 10:10:13.967 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:15 localhost nova_compute[281613]: 2025-11-23 10:10:15.940 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:15 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Nov 23 05:10:15 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736", "osd", "allow rw pool=manila_data namespace=fsvolumens_346654a1-8043-457a-94b6-3b076c21a1d5", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:10:15 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736", "osd", "allow rw pool=manila_data namespace=fsvolumens_346654a1-8043-457a-94b6-3b076c21a1d5", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:10:18 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e266 e266: 6 total, 6 up, 6 in
Nov 23 05:10:18 localhost nova_compute[281613]: 2025-11-23 10:10:18.970 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:19 localhost nova_compute[281613]: 2025-11-23 10:10:19.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:19 localhost nova_compute[281613]: 2025-11-23 10:10:19.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:10:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:20 localhost nova_compute[281613]: 2025-11-23 10:10:20.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:10:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:10:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:10:20 localhost podman[323365]: 2025-11-23 10:10:20.18994747 +0000 UTC m=+0.091939608 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:10:20 localhost podman[323365]: 2025-11-23 10:10:20.234065499 +0000 UTC m=+0.136057617 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 23 05:10:20 localhost systemd[1]: tmp-crun.3Ijsou.mount: Deactivated successfully.
Nov 23 05:10:20 localhost podman[323366]: 2025-11-23 10:10:20.245896518 +0000 UTC m=+0.143689332 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:10:20 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:10:20 localhost podman[323366]: 2025-11-23 10:10:20.278215849 +0000 UTC m=+0.176008673 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:10:20 localhost podman[323391]: 2025-11-23 10:10:20.289046771 +0000 UTC m=+0.096911273 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal)
Nov 23 05:10:20 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:10:20 localhost podman[323391]: 2025-11-23 10:10:20.304845947 +0000 UTC m=+0.112710509 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 05:10:20 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:10:20 localhost nova_compute[281613]: 2025-11-23 10:10:20.943 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:22 localhost nova_compute[281613]: 2025-11-23 10:10:22.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:22 localhost nova_compute[281613]: 2025-11-23 10:10:22.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:22 localhost openstack_network_exporter[242118]: ERROR   10:10:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:10:22 localhost openstack_network_exporter[242118]: ERROR   10:10:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:10:22 localhost openstack_network_exporter[242118]: ERROR   10:10:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:10:22 localhost openstack_network_exporter[242118]: ERROR   10:10:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:10:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:10:22 localhost openstack_network_exporter[242118]: ERROR   10:10:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:10:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:10:23 localhost nova_compute[281613]: 2025-11-23 10:10:23.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:23 localhost nova_compute[281613]: 2025-11-23 10:10:23.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:10:23 localhost nova_compute[281613]: 2025-11-23 10:10:23.020 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:10:23 localhost nova_compute[281613]: 2025-11-23 10:10:23.039 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:10:23 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 e267: 6 total, 6 up, 6 in
Nov 23 05:10:23 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Nov 23 05:10:23 localhost nova_compute[281613]: 2025-11-23 10:10:23.972 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:24 localhost nova_compute[281613]: 2025-11-23 10:10:24.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.015 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.035 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.035 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.035 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.036 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.036 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:10:25 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-272049507", "format": "json"} : dispatch
Nov 23 05:10:25 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:10:25 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/953996953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.503 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.712 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.714 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11503MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.715 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.715 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.789 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.789 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.810 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:10:25 localhost nova_compute[281613]: 2025-11-23 10:10:25.945 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:26 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:10:26 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1750128800' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:10:26 localhost nova_compute[281613]: 2025-11-23 10:10:26.223 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:10:26 localhost nova_compute[281613]: 2025-11-23 10:10:26.230 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:10:26 localhost nova_compute[281613]: 2025-11-23 10:10:26.383 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:10:26 localhost nova_compute[281613]: 2025-11-23 10:10:26.387 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:10:26 localhost nova_compute[281613]: 2025-11-23 10:10:26.387 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:10:26 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-272049507", "caps": ["mds", "allow rw path=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb", "osd", "allow rw pool=manila_data namespace=fsvolumens_b9ef5055-ce0d-4f29-b449-58f39c1f00af", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:10:26 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-272049507", "caps": ["mds", "allow rw path=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb", "osd", "allow rw pool=manila_data namespace=fsvolumens_b9ef5055-ce0d-4f29-b449-58f39c1f00af", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:10:28 localhost nova_compute[281613]: 2025-11-23 10:10:28.388 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:10:28 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb],prefix=session evict} (starting...)
Nov 23 05:10:28 localhost nova_compute[281613]: 2025-11-23 10:10:28.974 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:29 localhost ceph-mgr[287623]: client.0 ms_handle_reset on v2:172.18.0.106:6810/2037590349
Nov 23 05:10:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:30 localhost nova_compute[281613]: 2025-11-23 10:10:30.949 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:10:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:10:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:10:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:10:32 localhost systemd[1]: tmp-crun.aWlpj0.mount: Deactivated successfully.
Nov 23 05:10:32 localhost podman[323474]: 2025-11-23 10:10:32.205641849 +0000 UTC m=+0.100010415 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 05:10:32 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=tempest-cephx-id-272049507,client_metadata.root=/volumes/_nogroup/b9ef5055-ce0d-4f29-b449-58f39c1f00af/ad7774b9-dfc0-45be-aa87-07b19521a8eb],prefix=session evict} (starting...)
Nov 23 05:10:32 localhost podman[323474]: 2025-11-23 10:10:32.219862234 +0000 UTC m=+0.114230850 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:10:32 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:10:32 localhost podman[323481]: 2025-11-23 10:10:32.263517779 +0000 UTC m=+0.147669420 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 05:10:32 localhost podman[323475]: 2025-11-23 10:10:32.351137831 +0000 UTC m=+0.241574371 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:10:32 localhost podman[323481]: 2025-11-23 10:10:32.358962452 +0000 UTC m=+0.243114073 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 23 05:10:32 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:10:32 localhost podman[323475]: 2025-11-23 10:10:32.38489951 +0000 UTC m=+0.275336040 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:10:32 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:10:32 localhost podman[323473]: 2025-11-23 10:10:32.360882234 +0000 UTC m=+0.258703932 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 23 05:10:32 localhost podman[323473]: 2025-11-23 10:10:32.44388688 +0000 UTC m=+0.341708558 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 23 05:10:32 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:10:32 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-272049507", "format": "json"} : dispatch
Nov 23 05:10:32 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-272049507"} : dispatch
Nov 23 05:10:32 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-272049507"}]': finished
Nov 23 05:10:33 localhost nova_compute[281613]: 2025-11-23 10:10:33.976 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:35 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/346654a1-8043-457a-94b6-3b076c21a1d5/cb129bf7-168e-4c36-9f32-82b35d885736],prefix=session evict} (starting...)
Nov 23 05:10:35 localhost nova_compute[281613]: 2025-11-23 10:10:35.951 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:36 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Nov 23 05:10:36 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Nov 23 05:10:36 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 23 05:10:38 localhost nova_compute[281613]: 2025-11-23 10:10:38.977 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:39 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Nov 23 05:10:39 localhost sshd[323554]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:10:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:40 localhost nova_compute[281613]: 2025-11-23 10:10:40.953 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:41 localhost podman[240144]: time="2025-11-23T10:10:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:10:41 localhost podman[240144]: @ - - [23/Nov/2025:10:10:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:10:41 localhost podman[240144]: @ - - [23/Nov/2025:10:10:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19229 "" "Go-http-client/1.1"
Nov 23 05:10:42 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Nov 23 05:10:43 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2", "osd", "allow rw pool=manila_data namespace=fsvolumens_73463559-de38-4e65-91fe-e256e1993ef1", "mon", "allow r"], "format": "json"} : dispatch
Nov 23 05:10:43 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2", "osd", "allow rw pool=manila_data namespace=fsvolumens_73463559-de38-4e65-91fe-e256e1993ef1", "mon", "allow r"], "format": "json"}]': finished
Nov 23 05:10:43 localhost nova_compute[281613]: 2025-11-23 10:10:43.979 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:45 localhost nova_compute[281613]: 2025-11-23 10:10:45.955 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:48 localhost nova_compute[281613]: 2025-11-23 10:10:48.981 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:49 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Nov 23 05:10:49 localhost sshd[323556]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:10:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:10:50 localhost podman[323559]: 2025-11-23 10:10:50.751666094 +0000 UTC m=+0.086148633 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 05:10:50 localhost podman[323559]: 2025-11-23 10:10:50.76341522 +0000 UTC m=+0.097897809 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 23 05:10:50 localhost systemd[1]: tmp-crun.7XROwd.mount: Deactivated successfully.
Nov 23 05:10:50 localhost podman[323558]: 2025-11-23 10:10:50.802632767 +0000 UTC m=+0.139902231 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 05:10:50 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:10:50 localhost podman[323560]: 2025-11-23 10:10:50.862959822 +0000 UTC m=+0.194379659 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:10:50 localhost podman[323558]: 2025-11-23 10:10:50.869575911 +0000 UTC m=+0.206845405 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Nov 23 05:10:50 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:10:50 localhost podman[323560]: 2025-11-23 10:10:50.900895915 +0000 UTC m=+0.232315682 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:10:50 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:10:50 localhost nova_compute[281613]: 2025-11-23 10:10:50.958 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:51 localhost systemd[1]: tmp-crun.OkKqfX.mount: Deactivated successfully.
Nov 23 05:10:52 localhost openstack_network_exporter[242118]: ERROR   10:10:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:10:52 localhost openstack_network_exporter[242118]: ERROR   10:10:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:10:52 localhost openstack_network_exporter[242118]: ERROR   10:10:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:10:52 localhost openstack_network_exporter[242118]: ERROR   10:10:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:10:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:10:52 localhost openstack_network_exporter[242118]: ERROR   10:10:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:10:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:10:52 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:10:52 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:10:52 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/d73f5489-d8e5-493d-9400-04efa839bc7c/f4b8d58d-5cc5-4368-a26c-234a4f905dc5],prefix=session evict} (starting...)
Nov 23 05:10:53 localhost nova_compute[281613]: 2025-11-23 10:10:53.986 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:10:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:10:55 localhost nova_compute[281613]: 2025-11-23 10:10:55.960 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:55 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 23 05:10:56 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/73463559-de38-4e65-91fe-e256e1993ef1/5697482a-5566-4614-b3e6-cbfdb66450d2],prefix=session evict} (starting...)
Nov 23 05:10:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Nov 23 05:10:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Nov 23 05:10:56 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 23 05:10:58 localhost nova_compute[281613]: 2025-11-23 10:10:58.989 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:10:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 23 05:11:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4178085071' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 23 05:11:00 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 23 05:11:00 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4178085071' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 23 05:11:00 localhost nova_compute[281613]: 2025-11-23 10:11:00.963 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:11:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:11:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:11:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:11:03 localhost systemd[1]: tmp-crun.38Aw8y.mount: Deactivated successfully.
Nov 23 05:11:03 localhost podman[323706]: 2025-11-23 10:11:03.207991027 +0000 UTC m=+0.110209890 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible)
Nov 23 05:11:03 localhost podman[323707]: 2025-11-23 10:11:03.248591302 +0000 UTC m=+0.147292911 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:11:03 localhost podman[323707]: 2025-11-23 10:11:03.286610886 +0000 UTC m=+0.185312425 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:11:03 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:11:03 localhost podman[323705]: 2025-11-23 10:11:03.299791502 +0000 UTC m=+0.201735718 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:11:03 localhost podman[323705]: 2025-11-23 10:11:03.333034187 +0000 UTC m=+0.234978413 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 23 05:11:03 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:11:03 localhost podman[323708]: 2025-11-23 10:11:03.352120292 +0000 UTC m=+0.245056966 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 23 05:11:03 localhost podman[323706]: 2025-11-23 10:11:03.370901407 +0000 UTC m=+0.273120300 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:11:03 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:11:03 localhost podman[323708]: 2025-11-23 10:11:03.414085652 +0000 UTC m=+0.307022276 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 23 05:11:03 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:11:03 localhost nova_compute[281613]: 2025-11-23 10:11:03.990 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:05 localhost nova_compute[281613]: 2025-11-23 10:11:05.965 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:09 localhost nova_compute[281613]: 2025-11-23 10:11:09.008 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:11:09.277 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:11:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:11:09.278 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:11:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:11:09.278 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:11:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.201 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:11:10.201 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:11:10 localhost nova_compute[281613]: 2025-11-23 10:11:10.968 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:11 localhost podman[240144]: time="2025-11-23T10:11:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:11:11 localhost podman[240144]: @ - - [23/Nov/2025:10:11:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:11:11 localhost podman[240144]: @ - - [23/Nov/2025:10:11:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19228 "" "Go-http-client/1.1"
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.510240) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673510276, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1389, "num_deletes": 255, "total_data_size": 1910557, "memory_usage": 1935784, "flush_reason": "Manual Compaction"}
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673518270, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 1008116, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33067, "largest_seqno": 34450, "table_properties": {"data_size": 1003172, "index_size": 2223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14202, "raw_average_key_size": 21, "raw_value_size": 991949, "raw_average_value_size": 1528, "num_data_blocks": 98, "num_entries": 649, "num_filter_entries": 649, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892602, "oldest_key_time": 1763892602, "file_creation_time": 1763892673, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 8076 microseconds, and 3964 cpu microseconds.
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.518314) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 1008116 bytes OK
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.518338) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.521893) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.521916) EVENT_LOG_v1 {"time_micros": 1763892673521910, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.521935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1903675, prev total WAL file size 1903999, number of live WAL files 2.
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.522667) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323538' seq:72057594037927935, type:22 .. '6D6772737461740034353131' seq:0, type:0; will stop at (end)
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(984KB)], [54(17MB)]
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673522737, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 19632842, "oldest_snapshot_seqno": -1}
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 14510 keys, 17739894 bytes, temperature: kUnknown
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673591102, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17739894, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17657942, "index_size": 44603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36293, "raw_key_size": 389269, "raw_average_key_size": 26, "raw_value_size": 17412283, "raw_average_value_size": 1200, "num_data_blocks": 1652, "num_entries": 14510, "num_filter_entries": 14510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892673, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.591342) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17739894 bytes
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.592881) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 286.9 rd, 259.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 17.8 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(37.1) write-amplify(17.6) OK, records in: 15017, records dropped: 507 output_compression: NoCompression
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.592900) EVENT_LOG_v1 {"time_micros": 1763892673592892, "job": 32, "event": "compaction_finished", "compaction_time_micros": 68441, "compaction_time_cpu_micros": 37150, "output_level": 6, "num_output_files": 1, "total_output_size": 17739894, "num_input_records": 15017, "num_output_records": 14510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673593088, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892673594734, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.522567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.594785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.594792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.594795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.594797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:11:13 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:11:13.594800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:11:14 localhost nova_compute[281613]: 2025-11-23 10:11:14.012 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:14 localhost nova_compute[281613]: 2025-11-23 10:11:14.078 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:14 localhost ovn_metadata_agent[159423]: 2025-11-23 10:11:14.080 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:11:14 localhost ovn_metadata_agent[159423]: 2025-11-23 10:11:14.081 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:11:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:15 localhost nova_compute[281613]: 2025-11-23 10:11:15.970 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:19 localhost nova_compute[281613]: 2025-11-23 10:11:19.048 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:20 localhost nova_compute[281613]: 2025-11-23 10:11:20.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:20 localhost nova_compute[281613]: 2025-11-23 10:11:20.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:11:20 localhost nova_compute[281613]: 2025-11-23 10:11:20.975 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:11:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:11:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:11:21 localhost podman[323791]: 2025-11-23 10:11:21.197834966 +0000 UTC m=+0.100789636 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 23 05:11:21 localhost podman[323791]: 2025-11-23 10:11:21.238218104 +0000 UTC m=+0.141172734 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm)
Nov 23 05:11:21 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:11:21 localhost podman[323793]: 2025-11-23 10:11:21.244074253 +0000 UTC m=+0.140870088 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:11:21 localhost podman[323792]: 2025-11-23 10:11:21.308731195 +0000 UTC m=+0.211584183 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 23 05:11:21 localhost podman[323793]: 2025-11-23 10:11:21.323606966 +0000 UTC m=+0.220402801 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:11:21 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:11:21 localhost podman[323792]: 2025-11-23 10:11:21.374731903 +0000 UTC m=+0.277584881 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 23 05:11:21 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:11:22 localhost nova_compute[281613]: 2025-11-23 10:11:22.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:22 localhost ovn_metadata_agent[159423]: 2025-11-23 10:11:22.083 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:11:22 localhost openstack_network_exporter[242118]: ERROR   10:11:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:11:22 localhost openstack_network_exporter[242118]: ERROR   10:11:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:11:22 localhost openstack_network_exporter[242118]: ERROR   10:11:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:11:22 localhost openstack_network_exporter[242118]: ERROR   10:11:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:11:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:11:22 localhost openstack_network_exporter[242118]: ERROR   10:11:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:11:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:11:23 localhost nova_compute[281613]: 2025-11-23 10:11:23.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:23 localhost nova_compute[281613]: 2025-11-23 10:11:23.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:11:23 localhost nova_compute[281613]: 2025-11-23 10:11:23.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:11:23 localhost nova_compute[281613]: 2025-11-23 10:11:23.033 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:11:23 localhost nova_compute[281613]: 2025-11-23 10:11:23.033 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:23 localhost nova_compute[281613]: 2025-11-23 10:11:23.034 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:23 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e268 e268: 6 total, 6 up, 6 in
Nov 23 05:11:24 localhost nova_compute[281613]: 2025-11-23 10:11:24.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:24 localhost nova_compute[281613]: 2025-11-23 10:11:24.095 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:25 localhost nova_compute[281613]: 2025-11-23 10:11:25.015 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:25 localhost nova_compute[281613]: 2025-11-23 10:11:25.977 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.017 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.045 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.046 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.046 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.047 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.047 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:11:27 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:11:27 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1143743016' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.496 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.715 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.717 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11519MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.717 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.718 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.813 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.814 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:11:27 localhost nova_compute[281613]: 2025-11-23 10:11:27.840 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:11:28 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:11:28 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1604146320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:11:28 localhost nova_compute[281613]: 2025-11-23 10:11:28.301 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:11:28 localhost nova_compute[281613]: 2025-11-23 10:11:28.307 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:11:28 localhost nova_compute[281613]: 2025-11-23 10:11:28.325 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:11:28 localhost nova_compute[281613]: 2025-11-23 10:11:28.328 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:11:28 localhost nova_compute[281613]: 2025-11-23 10:11:28.328 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:11:28 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e269 e269: 6 total, 6 up, 6 in
Nov 23 05:11:29 localhost nova_compute[281613]: 2025-11-23 10:11:29.098 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:30 localhost nova_compute[281613]: 2025-11-23 10:11:30.330 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:30 localhost nova_compute[281613]: 2025-11-23 10:11:30.981 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:31 localhost nova_compute[281613]: 2025-11-23 10:11:31.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:11:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:11:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:11:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:11:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:11:34 localhost nova_compute[281613]: 2025-11-23 10:11:34.100 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:34 localhost podman[323895]: 2025-11-23 10:11:34.193255768 +0000 UTC m=+0.092103083 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 23 05:11:34 localhost podman[323895]: 2025-11-23 10:11:34.199237039 +0000 UTC m=+0.098084314 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 23 05:11:34 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:11:34 localhost podman[323897]: 2025-11-23 10:11:34.244199511 +0000 UTC m=+0.138013040 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:11:34 localhost podman[323897]: 2025-11-23 10:11:34.251812666 +0000 UTC m=+0.145626155 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:11:34 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:11:34 localhost podman[323898]: 2025-11-23 10:11:34.300103167 +0000 UTC m=+0.191066520 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 23 05:11:34 localhost podman[323896]: 2025-11-23 10:11:34.381914272 +0000 UTC m=+0.280076189 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 05:11:34 localhost podman[323896]: 2025-11-23 10:11:34.391754707 +0000 UTC m=+0.289916674 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 05:11:34 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:11:34 localhost podman[323898]: 2025-11-23 10:11:34.404008077 +0000 UTC m=+0.294971420 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 23 05:11:34 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:11:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:35 localhost nova_compute[281613]: 2025-11-23 10:11:35.985 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:39 localhost nova_compute[281613]: 2025-11-23 10:11:39.124 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:40 localhost nova_compute[281613]: 2025-11-23 10:11:40.986 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:41 localhost podman[240144]: time="2025-11-23T10:11:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:11:41 localhost podman[240144]: @ - - [23/Nov/2025:10:11:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:11:41 localhost podman[240144]: @ - - [23/Nov/2025:10:11:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19234 "" "Go-http-client/1.1"
Nov 23 05:11:43 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:11:43.591 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:11:43Z, description=, device_id=2dc260f8-5d48-4126-8d42-5f3396592b40, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b809d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790b80dc0>], id=85f360d8-aee4-4e40-b6da-d58029471091, ip_allocation=immediate, mac_address=fa:16:3e:f7:94:5e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3885, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:11:43Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:11:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e270 e270: 6 total, 6 up, 6 in
Nov 23 05:11:43 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:11:43 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:11:43 localhost podman[323997]: 2025-11-23 10:11:43.837151839 +0000 UTC m=+0.063176053 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:11:43 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:11:44 localhost nova_compute[281613]: 2025-11-23 10:11:44.128 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:44 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:11:44.153 262721 INFO neutron.agent.dhcp.agent [None req-24115b9f-7875-4ef4-87f3-b121cf7596a7 - - - - - -] DHCP configuration for ports {'85f360d8-aee4-4e40-b6da-d58029471091'} is completed#033[00m
Nov 23 05:11:44 localhost nova_compute[281613]: 2025-11-23 10:11:44.307 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:45 localhost nova_compute[281613]: 2025-11-23 10:11:45.990 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:48 localhost nova_compute[281613]: 2025-11-23 10:11:48.454 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e271 e271: 6 total, 6 up, 6 in
Nov 23 05:11:49 localhost nova_compute[281613]: 2025-11-23 10:11:49.130 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:50 localhost nova_compute[281613]: 2025-11-23 10:11:50.990 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:11:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:11:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:11:52 localhost systemd[1]: tmp-crun.4nzwGe.mount: Deactivated successfully.
Nov 23 05:11:52 localhost podman[324018]: 2025-11-23 10:11:52.253065368 +0000 UTC m=+0.155112901 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter)
Nov 23 05:11:52 localhost podman[324019]: 2025-11-23 10:11:52.211701903 +0000 UTC m=+0.111265879 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 23 05:11:52 localhost openstack_network_exporter[242118]: ERROR   10:11:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:11:52 localhost openstack_network_exporter[242118]: ERROR   10:11:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:11:52 localhost openstack_network_exporter[242118]: ERROR   10:11:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:11:52 localhost openstack_network_exporter[242118]: ERROR   10:11:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:11:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:11:52 localhost openstack_network_exporter[242118]: ERROR   10:11:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:11:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:11:52 localhost podman[324019]: 2025-11-23 10:11:52.29915175 +0000 UTC m=+0.198715776 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:11:52 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:11:52 localhost podman[324018]: 2025-11-23 10:11:52.369378112 +0000 UTC m=+0.271425625 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git)
Nov 23 05:11:52 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:11:52 localhost podman[324020]: 2025-11-23 10:11:52.369068014 +0000 UTC m=+0.263101901 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:11:52 localhost podman[324020]: 2025-11-23 10:11:52.451956608 +0000 UTC m=+0.345990485 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:11:52 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:11:53 localhost systemd[1]: tmp-crun.im1iHz.mount: Deactivated successfully.
Nov 23 05:11:53 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:11:53 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:11:54 localhost nova_compute[281613]: 2025-11-23 10:11:54.173 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:11:54 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:11:54 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:11:54 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:11:54 localhost podman[324181]: 2025-11-23 10:11:54.838429711 +0000 UTC m=+0.065973740 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 23 05:11:54 localhost nova_compute[281613]: 2025-11-23 10:11:54.880 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:11:55 localhost nova_compute[281613]: 2025-11-23 10:11:55.998 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:56 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:11:56.068 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:11:55Z, description=, device_id=8d1ecd33-a29e-452b-8902-8028e4b524ad, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a99610>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a99eb0>], id=a3ad99e3-5166-438f-a628-4036f1068abb, ip_allocation=immediate, mac_address=fa:16:3e:0a:0d:be, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3914, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:11:55Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:11:56 localhost podman[324220]: 2025-11-23 10:11:56.324464497 +0000 UTC m=+0.061063976 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:11:56 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:11:56 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:11:56 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:11:56 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:11:56.562 262721 INFO neutron.agent.dhcp.agent [None req-101cb7ad-2102-4b20-afb5-a9e77ae56eab - - - - - -] DHCP configuration for ports {'a3ad99e3-5166-438f-a628-4036f1068abb'} is completed#033[00m
Nov 23 05:11:57 localhost nova_compute[281613]: 2025-11-23 10:11:57.213 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:58 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e272 e272: 6 total, 6 up, 6 in
Nov 23 05:11:59 localhost nova_compute[281613]: 2025-11-23 10:11:59.176 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:59 localhost nova_compute[281613]: 2025-11-23 10:11:59.280 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:11:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:01 localhost nova_compute[281613]: 2025-11-23 10:12:00.999 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:12:01 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 23K writes, 89K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 23K writes, 7983 syncs, 2.94 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 33.83 MB, 0.06 MB/s#012Interval WAL: 13K writes, 5576 syncs, 2.49 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 05:12:03 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 e273: 6 total, 6 up, 6 in
Nov 23 05:12:03 localhost nova_compute[281613]: 2025-11-23 10:12:03.945 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:03 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:12:03 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:12:03 localhost podman[324257]: 2025-11-23 10:12:03.963981605 +0000 UTC m=+0.064753086 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:12:03 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:12:04 localhost nova_compute[281613]: 2025-11-23 10:12:04.177 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:12:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:12:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:12:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:12:05 localhost podman[324281]: 2025-11-23 10:12:05.187483908 +0000 UTC m=+0.084018416 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 23 05:12:05 localhost podman[324281]: 2025-11-23 10:12:05.199902452 +0000 UTC m=+0.096436960 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 05:12:05 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:12:05 localhost podman[324283]: 2025-11-23 10:12:05.254948366 +0000 UTC m=+0.143615951 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:12:05 localhost podman[324282]: 2025-11-23 10:12:05.343273545 +0000 UTC m=+0.236409551 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:12:05 localhost podman[324282]: 2025-11-23 10:12:05.357011746 +0000 UTC m=+0.250147732 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:12:05 localhost podman[324283]: 2025-11-23 10:12:05.370267873 +0000 UTC m=+0.258935448 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 23 05:12:05 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:12:05 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:12:05 localhost podman[324280]: 2025-11-23 10:12:05.445806899 +0000 UTC m=+0.342598793 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:12:05 localhost podman[324280]: 2025-11-23 10:12:05.451174434 +0000 UTC m=+0.347966318 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:12:05 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:12:06 localhost nova_compute[281613]: 2025-11-23 10:12:06.002 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:12:06 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 23K writes, 87K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.01 MB/s#012Cumulative WAL: 23K writes, 8146 syncs, 2.84 writes per sync, written: 0.08 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 15K writes, 54K keys, 15K commit groups, 1.0 writes per commit group, ingest: 48.02 MB, 0.08 MB/s#012Interval WAL: 15K writes, 6074 syncs, 2.47 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.562841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728562907, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1082, "num_deletes": 259, "total_data_size": 2010227, "memory_usage": 2029936, "flush_reason": "Manual Compaction"}
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728571480, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 1323910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34455, "largest_seqno": 35532, "table_properties": {"data_size": 1319327, "index_size": 2118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11039, "raw_average_key_size": 20, "raw_value_size": 1309602, "raw_average_value_size": 2385, "num_data_blocks": 93, "num_entries": 549, "num_filter_entries": 549, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892673, "oldest_key_time": 1763892673, "file_creation_time": 1763892728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 8732 microseconds, and 4115 cpu microseconds.
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.571581) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 1323910 bytes OK
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.571603) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.574618) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.574669) EVENT_LOG_v1 {"time_micros": 1763892728574658, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.574695) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2004734, prev total WAL file size 2005058, number of live WAL files 2.
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.575448) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353239' seq:72057594037927935, type:22 .. '6C6F676D0034373831' seq:0, type:0; will stop at (end)
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(1292KB)], [57(16MB)]
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728575506, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 19063804, "oldest_snapshot_seqno": -1}
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 14519 keys, 18927093 bytes, temperature: kUnknown
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728662392, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 18927093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18843424, "index_size": 46256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 390709, "raw_average_key_size": 26, "raw_value_size": 18595912, "raw_average_value_size": 1280, "num_data_blocks": 1718, "num_entries": 14519, "num_filter_entries": 14519, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.662959) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 18927093 bytes
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.665576) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.8 rd, 217.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 16.9 +0.0 blob) out(18.1 +0.0 blob), read-write-amplify(28.7) write-amplify(14.3) OK, records in: 15059, records dropped: 540 output_compression: NoCompression
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.665606) EVENT_LOG_v1 {"time_micros": 1763892728665593, "job": 34, "event": "compaction_finished", "compaction_time_micros": 87129, "compaction_time_cpu_micros": 53765, "output_level": 6, "num_output_files": 1, "total_output_size": 18927093, "num_input_records": 15059, "num_output_records": 14519, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728665967, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892728668452, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.575349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.668576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.668584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.668587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.668590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:08 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:08.668593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:12:09.021 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:12:08Z, description=, device_id=53863d0f-d6fc-41eb-8fc3-62cf12ff3867, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a79be0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790a79dc0>], id=1088405a-d39d-46ce-884e-be78d861434e, ip_allocation=immediate, mac_address=fa:16:3e:03:9d:96, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3943, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:12:08Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:12:09 localhost nova_compute[281613]: 2025-11-23 10:12:09.180 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:09 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:12:09 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:12:09 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:12:09 localhost podman[324378]: 2025-11-23 10:12:09.218048909 +0000 UTC m=+0.056601973 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 23 05:12:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:12:09.277 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:12:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:12:09.278 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:12:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:12:09.278 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:12:09 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:12:09.489 262721 INFO neutron.agent.dhcp.agent [None req-0ceee8a7-52a9-4d94-97a6-caeb4991eaa8 - - - - - -] DHCP configuration for ports {'1088405a-d39d-46ce-884e-be78d861434e'} is completed#033[00m
Nov 23 05:12:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:11 localhost nova_compute[281613]: 2025-11-23 10:12:11.004 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:11 localhost podman[240144]: time="2025-11-23T10:12:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:12:11 localhost podman[240144]: @ - - [23/Nov/2025:10:12:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:12:11 localhost podman[240144]: @ - - [23/Nov/2025:10:12:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19237 "" "Go-http-client/1.1"
Nov 23 05:12:11 localhost nova_compute[281613]: 2025-11-23 10:12:11.693 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:14 localhost nova_compute[281613]: 2025-11-23 10:12:14.183 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:14 localhost nova_compute[281613]: 2025-11-23 10:12:14.546 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:14 localhost ovn_metadata_agent[159423]: 2025-11-23 10:12:14.545 159429 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:9b:ed', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '8a:46:67:49:71:9c'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 23 05:12:14 localhost ovn_metadata_agent[159423]: 2025-11-23 10:12:14.547 159429 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 23 05:12:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:16 localhost nova_compute[281613]: 2025-11-23 10:12:16.008 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:17 localhost nova_compute[281613]: 2025-11-23 10:12:17.334 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:19 localhost nova_compute[281613]: 2025-11-23 10:12:19.215 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:20 localhost nova_compute[281613]: 2025-11-23 10:12:20.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:20 localhost nova_compute[281613]: 2025-11-23 10:12:20.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:12:21 localhost nova_compute[281613]: 2025-11-23 10:12:21.011 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:22 localhost openstack_network_exporter[242118]: ERROR   10:12:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:12:22 localhost openstack_network_exporter[242118]: ERROR   10:12:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:12:22 localhost openstack_network_exporter[242118]: ERROR   10:12:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:12:22 localhost openstack_network_exporter[242118]: ERROR   10:12:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:12:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:12:22 localhost openstack_network_exporter[242118]: ERROR   10:12:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:12:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:12:23 localhost nova_compute[281613]: 2025-11-23 10:12:23.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:23 localhost nova_compute[281613]: 2025-11-23 10:12:23.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:12:23 localhost nova_compute[281613]: 2025-11-23 10:12:23.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:12:23 localhost nova_compute[281613]: 2025-11-23 10:12:23.035 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:12:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:12:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:12:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:12:23 localhost podman[324399]: 2025-11-23 10:12:23.1853839 +0000 UTC m=+0.091190841 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=)
Nov 23 05:12:23 localhost podman[324399]: 2025-11-23 10:12:23.202274265 +0000 UTC m=+0.108081206 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.expose-services=, version=9.6)
Nov 23 05:12:23 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:12:23 localhost podman[324400]: 2025-11-23 10:12:23.284979627 +0000 UTC m=+0.186861123 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:12:23 localhost podman[324400]: 2025-11-23 10:12:23.300996617 +0000 UTC m=+0.202878113 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 23 05:12:23 localhost systemd[1]: tmp-crun.lu0nJd.mount: Deactivated successfully.
Nov 23 05:12:23 localhost podman[324401]: 2025-11-23 10:12:23.335305219 +0000 UTC m=+0.234182555 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:12:23 localhost podman[324401]: 2025-11-23 10:12:23.347987401 +0000 UTC m=+0.246864797 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:12:23 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:12:23 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:12:23 localhost ovn_metadata_agent[159423]: 2025-11-23 10:12:23.548 159429 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9b5ed7a7-8af8-41a0-a5ff-546625cecbf9, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 23 05:12:23 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:12:23.558 262721 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-23T10:12:23Z, description=, device_id=b16795fb-9d48-4b58-8265-f41a923b13e6, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cbac70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7fa790cbaa30>], id=694d2e34-aeba-4f44-9be6-4ae8bed31d17, ip_allocation=immediate, mac_address=fa:16:3e:a1:8a:22, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-23T08:24:56Z, description=, dns_domain=, id=4888f017-3f3f-45ef-b058-53b634233093, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=1915d3e5d4254231a0517e2dcf35848f, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5e38bf46-3a55-46da-9318-fc6a28fa61ca'], tags=[], tenant_id=1915d3e5d4254231a0517e2dcf35848f, updated_at=2025-11-23T08:25:02Z, vlan_transparent=None, network_id=4888f017-3f3f-45ef-b058-53b634233093, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3956, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-23T10:12:23Z on network 4888f017-3f3f-45ef-b058-53b634233093#033[00m
Nov 23 05:12:23 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 3 addresses
Nov 23 05:12:23 localhost podman[324479]: 2025-11-23 10:12:23.787214475 +0000 UTC m=+0.065470101 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:12:23 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:12:23 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:12:23 localhost neutron_dhcp_agent[262717]: 2025-11-23 10:12:23.996 262721 INFO neutron.agent.dhcp.agent [None req-3c48e855-6abd-46b5-8942-e9974c4fdb19 - - - - - -] DHCP configuration for ports {'694d2e34-aeba-4f44-9be6-4ae8bed31d17'} is completed#033[00m
Nov 23 05:12:24 localhost nova_compute[281613]: 2025-11-23 10:12:24.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:24 localhost nova_compute[281613]: 2025-11-23 10:12:24.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:24 localhost nova_compute[281613]: 2025-11-23 10:12:24.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:24 localhost nova_compute[281613]: 2025-11-23 10:12:24.218 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:25 localhost nova_compute[281613]: 2025-11-23 10:12:25.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:25 localhost nova_compute[281613]: 2025-11-23 10:12:25.756 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:26 localhost nova_compute[281613]: 2025-11-23 10:12:26.015 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:26 localhost nova_compute[281613]: 2025-11-23 10:12:26.017 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.049 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.049 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.050 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.050 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.050 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.263 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:12:29 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2515636495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.497 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.723 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.725 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11510MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.726 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.726 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.807 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.807 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:12:29 localhost nova_compute[281613]: 2025-11-23 10:12:29.825 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:12:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:30 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:12:30 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2885826626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:12:30 localhost nova_compute[281613]: 2025-11-23 10:12:30.273 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:12:30 localhost nova_compute[281613]: 2025-11-23 10:12:30.280 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:12:30 localhost nova_compute[281613]: 2025-11-23 10:12:30.298 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:12:30 localhost nova_compute[281613]: 2025-11-23 10:12:30.300 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:12:30 localhost nova_compute[281613]: 2025-11-23 10:12:30.301 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:12:31 localhost nova_compute[281613]: 2025-11-23 10:12:31.020 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:31 localhost nova_compute[281613]: 2025-11-23 10:12:31.301 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:12:32 localhost nova_compute[281613]: 2025-11-23 10:12:32.493 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:34 localhost nova_compute[281613]: 2025-11-23 10:12:34.266 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:36 localhost nova_compute[281613]: 2025-11-23 10:12:36.023 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:12:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:12:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:12:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:12:36 localhost podman[324543]: 2025-11-23 10:12:36.184274816 +0000 UTC m=+0.093515365 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 05:12:36 localhost podman[324543]: 2025-11-23 10:12:36.217976002 +0000 UTC m=+0.127216551 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 23 05:12:36 localhost podman[324544]: 2025-11-23 10:12:36.227342563 +0000 UTC m=+0.129329306 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 23 05:12:36 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:12:36 localhost podman[324544]: 2025-11-23 10:12:36.23911163 +0000 UTC m=+0.141098293 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 05:12:36 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:12:36 localhost podman[324550]: 2025-11-23 10:12:36.296745369 +0000 UTC m=+0.192810173 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 23 05:12:36 localhost podman[324550]: 2025-11-23 10:12:36.305790182 +0000 UTC m=+0.201854946 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:12:36 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:12:36 localhost podman[324551]: 2025-11-23 10:12:36.390895999 +0000 UTC m=+0.285695650 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 05:12:36 localhost podman[324551]: 2025-11-23 10:12:36.455153966 +0000 UTC m=+0.349953617 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 05:12:36 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:12:37 localhost nova_compute[281613]: 2025-11-23 10:12:37.450 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:37 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 2 addresses
Nov 23 05:12:37 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:12:37 localhost podman[324640]: 2025-11-23 10:12:37.457951365 +0000 UTC m=+0.063253330 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:12:37 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:12:39 localhost nova_compute[281613]: 2025-11-23 10:12:39.303 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:41 localhost nova_compute[281613]: 2025-11-23 10:12:41.025 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:41 localhost podman[240144]: time="2025-11-23T10:12:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:12:41 localhost podman[240144]: @ - - [23/Nov/2025:10:12:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:12:41 localhost podman[240144]: @ - - [23/Nov/2025:10:12:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19232 "" "Go-http-client/1.1"
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.764413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761764467, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 618, "num_deletes": 251, "total_data_size": 504701, "memory_usage": 515992, "flush_reason": "Manual Compaction"}
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761769264, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 328382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35537, "largest_seqno": 36150, "table_properties": {"data_size": 325483, "index_size": 882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7217, "raw_average_key_size": 19, "raw_value_size": 319655, "raw_average_value_size": 875, "num_data_blocks": 39, "num_entries": 365, "num_filter_entries": 365, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763892728, "oldest_key_time": 1763892728, "file_creation_time": 1763892761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4890 microseconds, and 1764 cpu microseconds.
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.769305) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 328382 bytes OK
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.769324) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.770781) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.770801) EVENT_LOG_v1 {"time_micros": 1763892761770795, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.770819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 501228, prev total WAL file size 501228, number of live WAL files 2.
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.773308) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end)
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(320KB)], [60(18MB)]
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761773361, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19255475, "oldest_snapshot_seqno": -1}
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 14369 keys, 17821096 bytes, temperature: kUnknown
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761858829, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 17821096, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17740165, "index_size": 43902, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35973, "raw_key_size": 388058, "raw_average_key_size": 27, "raw_value_size": 17496942, "raw_average_value_size": 1217, "num_data_blocks": 1615, "num_entries": 14369, "num_filter_entries": 14369, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763891619, "oldest_key_time": 0, "file_creation_time": 1763892761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "30f02bec-0087-464e-96d9-108a203904da", "db_session_id": "QS3FPCEL51SUB5WPWX0A", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.859175) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 17821096 bytes
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.861179) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.0 rd, 208.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 18.1 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(112.9) write-amplify(54.3) OK, records in: 14884, records dropped: 515 output_compression: NoCompression
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.861207) EVENT_LOG_v1 {"time_micros": 1763892761861195, "job": 36, "event": "compaction_finished", "compaction_time_micros": 85568, "compaction_time_cpu_micros": 49918, "output_level": 6, "num_output_files": 1, "total_output_size": 17821096, "num_input_records": 14884, "num_output_records": 14369, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761861410, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005532586/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763892761864345, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.773224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.864632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.864638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.864641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.864644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:41 localhost ceph-mon[302802]: rocksdb: (Original Log Time 2025/11/23-10:12:41.864646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 23 05:12:43 localhost nova_compute[281613]: 2025-11-23 10:12:43.251 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:43 localhost dnsmasq[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/addn_hosts - 1 addresses
Nov 23 05:12:43 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/host
Nov 23 05:12:43 localhost dnsmasq-dhcp[310221]: read /var/lib/neutron/dhcp/4888f017-3f3f-45ef-b058-53b634233093/opts
Nov 23 05:12:43 localhost podman[324675]: 2025-11-23 10:12:43.254586442 +0000 UTC m=+0.079761515 container kill f96b6263df2178d110555d16895bf5858b14a2a3c079141b65739c1f3cbba64d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4888f017-3f3f-45ef-b058-53b634233093, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 05:12:44 localhost nova_compute[281613]: 2025-11-23 10:12:44.340 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:46 localhost nova_compute[281613]: 2025-11-23 10:12:46.030 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:49 localhost nova_compute[281613]: 2025-11-23 10:12:49.345 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:51 localhost nova_compute[281613]: 2025-11-23 10:12:51.034 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:52 localhost openstack_network_exporter[242118]: ERROR   10:12:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:12:52 localhost openstack_network_exporter[242118]: ERROR   10:12:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:12:52 localhost openstack_network_exporter[242118]: ERROR   10:12:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:12:52 localhost openstack_network_exporter[242118]: ERROR   10:12:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:12:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:12:52 localhost openstack_network_exporter[242118]: ERROR   10:12:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:12:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:12:53 localhost podman[324715]: 2025-11-23 10:12:53.504597381 +0000 UTC m=+0.100549282 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Nov 23 05:12:53 localhost podman[324715]: 2025-11-23 10:12:53.534048793 +0000 UTC m=+0.130000674 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350)
Nov 23 05:12:53 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:12:53 localhost systemd[1]: tmp-crun.aIgcWe.mount: Deactivated successfully.
Nov 23 05:12:53 localhost podman[324716]: 2025-11-23 10:12:53.55962285 +0000 UTC m=+0.153574638 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:12:53 localhost podman[324716]: 2025-11-23 10:12:53.567886862 +0000 UTC m=+0.161838600 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:12:53 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:12:53 localhost podman[324757]: 2025-11-23 10:12:53.639786555 +0000 UTC m=+0.143624642 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 23 05:12:53 localhost podman[324757]: 2025-11-23 10:12:53.650326628 +0000 UTC m=+0.154164755 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 23 05:12:53 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:12:54 localhost nova_compute[281613]: 2025-11-23 10:12:54.390 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:12:54 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:12:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:12:56 localhost nova_compute[281613]: 2025-11-23 10:12:56.035 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:12:59 localhost nova_compute[281613]: 2025-11-23 10:12:59.394 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:12:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:01 localhost nova_compute[281613]: 2025-11-23 10:13:01.038 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:04 localhost nova_compute[281613]: 2025-11-23 10:13:04.448 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:06 localhost sshd[324842]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:06 localhost nova_compute[281613]: 2025-11-23 10:13:06.047 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:13:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:13:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:13:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:13:07 localhost podman[324844]: 2025-11-23 10:13:07.191803426 +0000 UTC m=+0.091572091 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 23 05:13:07 localhost podman[324845]: 2025-11-23 10:13:07.246584319 +0000 UTC m=+0.142893972 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3)
Nov 23 05:13:07 localhost systemd[1]: tmp-crun.hAEvcx.mount: Deactivated successfully.
Nov 23 05:13:07 localhost podman[324847]: 2025-11-23 10:13:07.304216887 +0000 UTC m=+0.192730000 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 23 05:13:07 localhost podman[324844]: 2025-11-23 10:13:07.327108913 +0000 UTC m=+0.226877598 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 23 05:13:07 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:13:07 localhost podman[324845]: 2025-11-23 10:13:07.383697624 +0000 UTC m=+0.280007327 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 23 05:13:07 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:13:07 localhost podman[324847]: 2025-11-23 10:13:07.399345824 +0000 UTC m=+0.287858867 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:13:07 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:13:07 localhost podman[324846]: 2025-11-23 10:13:07.403793693 +0000 UTC m=+0.296283823 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 23 05:13:07 localhost podman[324846]: 2025-11-23 10:13:07.485761436 +0000 UTC m=+0.378251586 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:13:07 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:13:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:13:09.278 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:13:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:13:09.278 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:13:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:13:09.279 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:13:09 localhost nova_compute[281613]: 2025-11-23 10:13:09.451 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.201 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:10 localhost ceilometer_agent_compute[237484]: 2025-11-23 10:13:10.201 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 23 05:13:11 localhost nova_compute[281613]: 2025-11-23 10:13:11.050 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:11 localhost podman[240144]: time="2025-11-23T10:13:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:13:11 localhost podman[240144]: @ - - [23/Nov/2025:10:13:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:13:11 localhost podman[240144]: @ - - [23/Nov/2025:10:13:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19232 "" "Go-http-client/1.1"
Nov 23 05:13:14 localhost nova_compute[281613]: 2025-11-23 10:13:14.483 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:14 localhost ovn_controller[153786]: 2025-11-23T10:13:14Z|00213|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 23 05:13:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:15 localhost sshd[324929]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:15 localhost systemd-logind[761]: New session 72 of user zuul.
Nov 23 05:13:15 localhost systemd[1]: Started Session 72 of User zuul.
Nov 23 05:13:15 localhost python3[324951]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-fc5a-8bfb-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 23 05:13:16 localhost nova_compute[281613]: 2025-11-23 10:13:16.053 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:18 localhost sshd[324954]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:19 localhost nova_compute[281613]: 2025-11-23 10:13:19.487 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:20 localhost systemd[1]: session-72.scope: Deactivated successfully.
Nov 23 05:13:20 localhost systemd-logind[761]: Session 72 logged out. Waiting for processes to exit.
Nov 23 05:13:20 localhost systemd-logind[761]: Removed session 72.
Nov 23 05:13:21 localhost nova_compute[281613]: 2025-11-23 10:13:21.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:21 localhost nova_compute[281613]: 2025-11-23 10:13:21.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:13:21 localhost nova_compute[281613]: 2025-11-23 10:13:21.054 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:22 localhost openstack_network_exporter[242118]: ERROR   10:13:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:13:22 localhost openstack_network_exporter[242118]: ERROR   10:13:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:13:22 localhost openstack_network_exporter[242118]: ERROR   10:13:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:13:22 localhost openstack_network_exporter[242118]: ERROR   10:13:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:13:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:13:22 localhost openstack_network_exporter[242118]: ERROR   10:13:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:13:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:13:23 localhost nova_compute[281613]: 2025-11-23 10:13:23.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:23 localhost nova_compute[281613]: 2025-11-23 10:13:23.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:13:23 localhost nova_compute[281613]: 2025-11-23 10:13:23.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:13:23 localhost nova_compute[281613]: 2025-11-23 10:13:23.036 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:13:24 localhost nova_compute[281613]: 2025-11-23 10:13:24.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:24 localhost nova_compute[281613]: 2025-11-23 10:13:24.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:24 localhost nova_compute[281613]: 2025-11-23 10:13:24.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:13:24 localhost podman[324956]: 2025-11-23 10:13:24.196598642 +0000 UTC m=+0.098835797 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Nov 23 05:13:24 localhost podman[324958]: 2025-11-23 10:13:24.233449943 +0000 UTC m=+0.129875882 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:13:24 localhost podman[324958]: 2025-11-23 10:13:24.243933275 +0000 UTC m=+0.140359184 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:13:24 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:13:24 localhost podman[324957]: 2025-11-23 10:13:24.292454109 +0000 UTC m=+0.194035956 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 23 05:13:24 localhost podman[324957]: 2025-11-23 10:13:24.303762793 +0000 UTC m=+0.205344640 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 23 05:13:24 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:13:24 localhost podman[324956]: 2025-11-23 10:13:24.315315773 +0000 UTC m=+0.217552958 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that 
uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41)
Nov 23 05:13:24 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:13:24 localhost nova_compute[281613]: 2025-11-23 10:13:24.526 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:26 localhost nova_compute[281613]: 2025-11-23 10:13:26.057 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:27 localhost nova_compute[281613]: 2025-11-23 10:13:27.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:28 localhost nova_compute[281613]: 2025-11-23 10:13:28.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.040 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.040 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.041 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.041 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.041 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:13:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:13:29 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/159070162' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.498 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.531 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.701 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.703 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11505MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.703 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:13:29 localhost nova_compute[281613]: 2025-11-23 10:13:29.704 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:13:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.091 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.092 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.190 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing inventories for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.250 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating ProviderTree inventory for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.251 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Updating inventory in ProviderTree for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.268 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing aggregate associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.290 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Refreshing trait associations for resource provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.307 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:13:30 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:13:30 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3920762680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.740 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.747 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.794 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.797 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:13:30 localhost nova_compute[281613]: 2025-11-23 10:13:30.797 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:13:31 localhost nova_compute[281613]: 2025-11-23 10:13:31.081 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:32 localhost nova_compute[281613]: 2025-11-23 10:13:32.798 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:33 localhost nova_compute[281613]: 2025-11-23 10:13:33.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:33 localhost nova_compute[281613]: 2025-11-23 10:13:33.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 23 05:13:33 localhost nova_compute[281613]: 2025-11-23 10:13:33.037 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 23 05:13:34 localhost nova_compute[281613]: 2025-11-23 10:13:34.579 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:36 localhost nova_compute[281613]: 2025-11-23 10:13:36.034 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:36 localhost nova_compute[281613]: 2025-11-23 10:13:36.122 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:13:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:13:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:13:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:13:38 localhost podman[325064]: 2025-11-23 10:13:38.185322501 +0000 UTC m=+0.082676633 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:13:38 localhost podman[325064]: 2025-11-23 10:13:38.192664299 +0000 UTC m=+0.090018521 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:13:38 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:13:38 localhost podman[325062]: 2025-11-23 10:13:38.237685138 +0000 UTC m=+0.142473439 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 23 05:13:38 localhost podman[325062]: 2025-11-23 10:13:38.241716516 +0000 UTC m=+0.146504807 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:13:38 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:13:38 localhost podman[325070]: 2025-11-23 10:13:38.289218123 +0000 UTC m=+0.186171804 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 23 05:13:38 localhost podman[325063]: 2025-11-23 10:13:38.357319533 +0000 UTC m=+0.259335280 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 23 05:13:38 localhost podman[325070]: 2025-11-23 10:13:38.35829515 +0000 UTC m=+0.255248881 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 23 05:13:38 localhost podman[325063]: 2025-11-23 10:13:38.398015608 +0000 UTC m=+0.300031375 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:13:38 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:13:38 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:13:39 localhost nova_compute[281613]: 2025-11-23 10:13:39.581 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:39 localhost sshd[325148]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:13:39 localhost ceph-mon[302802]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4691 writes, 36K keys, 4691 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s#012Cumulative WAL: 4691 writes, 4691 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2453 writes, 13K keys, 2453 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s#012Interval WAL: 2453 writes, 2453 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    174.5      0.23              0.11        18    0.013       0      0       0.0       0.0#012  L6      1/0   17.00 MB   0.0      0.3     0.0      0.3       0.3      0.0       0.0   7.2    227.8    211.2      1.34              0.76        17    0.079    229K   8773       0.0       0.0#012 Sum      1/0   17.00 MB   0.0      0.3     0.0      0.3       0.3      0.1       0.0   8.2    195.1    205.9      1.57              0.87        35    0.045    229K   8773       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0  12.2    199.4    203.1      0.74              0.41        16    0.046    115K   4297       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.0      0.3       0.3      0.0       0.0   0.0    227.8    211.2      1.34              0.76        17    0.079    229K   8773       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    176.4      0.22              0.11        17    0.013       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.038, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.31 GB write, 0.27 MB/s write, 0.30 GB read, 0.25 MB/s read, 1.6 seconds#012Interval compaction: 0.15 GB write, 0.25 MB/s write, 0.14 GB read, 0.24 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5569c8721350#2 capacity: 304.00 MB usage: 22.99 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000221 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1369,21.53 MB,7.08284%) FilterBlock(35,662.80 KB,0.212915%) IndexBlock(35,832.39 KB,0.267395%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 23 05:13:39 localhost systemd-logind[761]: New session 73 of user zuul.
Nov 23 05:13:39 localhost systemd[1]: Started Session 73 of User zuul.
Nov 23 05:13:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:40 localhost systemd[1]: session-73.scope: Deactivated successfully.
Nov 23 05:13:40 localhost systemd-logind[761]: Session 73 logged out. Waiting for processes to exit.
Nov 23 05:13:40 localhost systemd-logind[761]: Removed session 73.
Nov 23 05:13:41 localhost sshd[325170]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:41 localhost nova_compute[281613]: 2025-11-23 10:13:41.126 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:41 localhost systemd-logind[761]: New session 74 of user zuul.
Nov 23 05:13:41 localhost systemd[1]: Started Session 74 of User zuul.
Nov 23 05:13:41 localhost podman[240144]: time="2025-11-23T10:13:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:13:41 localhost podman[240144]: @ - - [23/Nov/2025:10:13:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:13:41 localhost systemd[1]: session-74.scope: Deactivated successfully.
Nov 23 05:13:41 localhost systemd-logind[761]: Session 74 logged out. Waiting for processes to exit.
Nov 23 05:13:41 localhost systemd-logind[761]: Removed session 74.
Nov 23 05:13:41 localhost podman[240144]: @ - - [23/Nov/2025:10:13:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19235 "" "Go-http-client/1.1"
Nov 23 05:13:41 localhost sshd[325192]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:41 localhost systemd-logind[761]: New session 75 of user zuul.
Nov 23 05:13:41 localhost systemd[1]: Started Session 75 of User zuul.
Nov 23 05:13:41 localhost systemd-logind[761]: Session 75 logged out. Waiting for processes to exit.
Nov 23 05:13:41 localhost systemd[1]: session-75.scope: Deactivated successfully.
Nov 23 05:13:41 localhost systemd-logind[761]: Removed session 75.
Nov 23 05:13:42 localhost sshd[325214]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:42 localhost systemd-logind[761]: New session 76 of user zuul.
Nov 23 05:13:42 localhost systemd[1]: Started Session 76 of User zuul.
Nov 23 05:13:42 localhost systemd[1]: session-76.scope: Deactivated successfully.
Nov 23 05:13:42 localhost systemd-logind[761]: Session 76 logged out. Waiting for processes to exit.
Nov 23 05:13:42 localhost systemd-logind[761]: Removed session 76.
Nov 23 05:13:42 localhost sshd[325236]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:42 localhost systemd-logind[761]: New session 77 of user zuul.
Nov 23 05:13:43 localhost systemd[1]: Started Session 77 of User zuul.
Nov 23 05:13:43 localhost systemd[1]: session-77.scope: Deactivated successfully.
Nov 23 05:13:43 localhost systemd-logind[761]: Session 77 logged out. Waiting for processes to exit.
Nov 23 05:13:43 localhost systemd-logind[761]: Removed session 77.
Nov 23 05:13:43 localhost sshd[325258]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:43 localhost systemd-logind[761]: New session 78 of user zuul.
Nov 23 05:13:43 localhost systemd[1]: Started Session 78 of User zuul.
Nov 23 05:13:43 localhost systemd[1]: session-78.scope: Deactivated successfully.
Nov 23 05:13:43 localhost systemd-logind[761]: Session 78 logged out. Waiting for processes to exit.
Nov 23 05:13:43 localhost systemd-logind[761]: Removed session 78.
Nov 23 05:13:44 localhost sshd[325280]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:44 localhost systemd-logind[761]: New session 79 of user zuul.
Nov 23 05:13:44 localhost systemd[1]: Started Session 79 of User zuul.
Nov 23 05:13:44 localhost systemd[1]: session-79.scope: Deactivated successfully.
Nov 23 05:13:44 localhost systemd-logind[761]: Session 79 logged out. Waiting for processes to exit.
Nov 23 05:13:44 localhost systemd-logind[761]: Removed session 79.
Nov 23 05:13:44 localhost sshd[325302]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:44 localhost nova_compute[281613]: 2025-11-23 10:13:44.606 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:44 localhost systemd-logind[761]: New session 80 of user zuul.
Nov 23 05:13:44 localhost systemd[1]: Started Session 80 of User zuul.
Nov 23 05:13:44 localhost systemd[1]: session-80.scope: Deactivated successfully.
Nov 23 05:13:44 localhost systemd-logind[761]: Session 80 logged out. Waiting for processes to exit.
Nov 23 05:13:44 localhost systemd-logind[761]: Removed session 80.
Nov 23 05:13:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:45 localhost sshd[325324]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:13:45 localhost systemd-logind[761]: New session 81 of user zuul.
Nov 23 05:13:45 localhost systemd[1]: Started Session 81 of User zuul.
Nov 23 05:13:45 localhost systemd[1]: session-81.scope: Deactivated successfully.
Nov 23 05:13:45 localhost systemd-logind[761]: Session 81 logged out. Waiting for processes to exit.
Nov 23 05:13:45 localhost systemd-logind[761]: Removed session 81.
Nov 23 05:13:46 localhost nova_compute[281613]: 2025-11-23 10:13:46.130 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:49 localhost nova_compute[281613]: 2025-11-23 10:13:49.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:49 localhost nova_compute[281613]: 2025-11-23 10:13:49.610 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:50 localhost nova_compute[281613]: 2025-11-23 10:13:50.815 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:51 localhost nova_compute[281613]: 2025-11-23 10:13:51.132 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:52 localhost openstack_network_exporter[242118]: ERROR   10:13:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:13:52 localhost openstack_network_exporter[242118]: ERROR   10:13:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:13:52 localhost openstack_network_exporter[242118]: ERROR   10:13:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:13:52 localhost openstack_network_exporter[242118]: ERROR   10:13:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:13:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:13:52 localhost openstack_network_exporter[242118]: ERROR   10:13:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:13:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:13:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:13:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:13:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:13:54 localhost nova_compute[281613]: 2025-11-23 10:13:54.655 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:54 localhost systemd[1]: tmp-crun.PEDoVD.mount: Deactivated successfully.
Nov 23 05:13:54 localhost podman[325365]: 2025-11-23 10:13:54.688851135 +0000 UTC m=+0.098856658 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Nov 23 05:13:54 localhost podman[325364]: 2025-11-23 10:13:54.711069802 +0000 UTC m=+0.122698119 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm)
Nov 23 05:13:54 localhost podman[325364]: 2025-11-23 10:13:54.72813506 +0000 UTC m=+0.139763397 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Nov 23 05:13:54 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:13:54 localhost podman[325365]: 2025-11-23 10:13:54.786963641 +0000 UTC m=+0.196969194 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:13:54 localhost podman[325366]: 2025-11-23 10:13:54.793695233 +0000 UTC m=+0.198908107 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:13:54 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:13:54 localhost podman[325366]: 2025-11-23 10:13:54.806920547 +0000 UTC m=+0.212133462 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:13:54 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:13:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:13:55 localhost systemd[1]: tmp-crun.RkSlF7.mount: Deactivated successfully.
Nov 23 05:13:56 localhost nova_compute[281613]: 2025-11-23 10:13:56.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:13:56 localhost nova_compute[281613]: 2025-11-23 10:13:56.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 23 05:13:56 localhost nova_compute[281613]: 2025-11-23 10:13:56.133 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:56 localhost podman[325554]: 
Nov 23 05:13:56 localhost podman[325554]: 2025-11-23 10:13:56.191682954 +0000 UTC m=+0.077210507 container create 54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_ardinghelli, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, GIT_CLEAN=True)
Nov 23 05:13:56 localhost systemd[1]: Started libpod-conmon-54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75.scope.
Nov 23 05:13:56 localhost systemd[1]: tmp-crun.BlYLy5.mount: Deactivated successfully.
Nov 23 05:13:56 localhost podman[325554]: 2025-11-23 10:13:56.158486352 +0000 UTC m=+0.044013955 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 05:13:56 localhost systemd[1]: Started libcrun container.
Nov 23 05:13:56 localhost podman[325554]: 2025-11-23 10:13:56.282944316 +0000 UTC m=+0.168471869 container init 54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_ardinghelli, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Nov 23 05:13:56 localhost podman[325554]: 2025-11-23 10:13:56.29646305 +0000 UTC m=+0.181990613 container start 54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_ardinghelli, RELEASE=main, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True)
Nov 23 05:13:56 localhost podman[325554]: 2025-11-23 10:13:56.296814569 +0000 UTC m=+0.182342322 container attach 54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_ardinghelli, ceph=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph)
Nov 23 05:13:56 localhost brave_ardinghelli[325569]: 167 167
Nov 23 05:13:56 localhost systemd[1]: libpod-54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75.scope: Deactivated successfully.
Nov 23 05:13:56 localhost podman[325554]: 2025-11-23 10:13:56.302385518 +0000 UTC m=+0.187913121 container died 54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_ardinghelli, GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, RELEASE=main)
Nov 23 05:13:56 localhost podman[325574]: 2025-11-23 10:13:56.421578382 +0000 UTC m=+0.105586019 container remove 54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_ardinghelli, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public, version=7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True)
Nov 23 05:13:56 localhost systemd[1]: libpod-conmon-54df8e85f24f4569485c16140f8b3e19332524b114d2ffadcd1da9335fd06f75.scope: Deactivated successfully.
Nov 23 05:13:56 localhost podman[325595]: 
Nov 23 05:13:56 localhost podman[325595]: 2025-11-23 10:13:56.664283794 +0000 UTC m=+0.080438422 container create 87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_gould, vcs-type=git, name=rhceph, ceph=True, GIT_CLEAN=True, io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7)
Nov 23 05:13:56 localhost systemd[1]: var-lib-containers-storage-overlay-c177e62d8484997dd5f4a8c84734b76df77bc2a5c30ab614bab36991d359eb2c-merged.mount: Deactivated successfully.
Nov 23 05:13:56 localhost systemd[1]: Started libpod-conmon-87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3.scope.
Nov 23 05:13:56 localhost systemd[1]: Started libcrun container.
Nov 23 05:13:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764e9f4c591fee5fe216306afc142e5b9b46ad6a866194f5188f212ec32cce0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 23 05:13:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764e9f4c591fee5fe216306afc142e5b9b46ad6a866194f5188f212ec32cce0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 23 05:13:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764e9f4c591fee5fe216306afc142e5b9b46ad6a866194f5188f212ec32cce0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 23 05:13:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/764e9f4c591fee5fe216306afc142e5b9b46ad6a866194f5188f212ec32cce0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 23 05:13:56 localhost podman[325595]: 2025-11-23 10:13:56.632014818 +0000 UTC m=+0.048169466 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 23 05:13:56 localhost podman[325595]: 2025-11-23 10:13:56.733284679 +0000 UTC m=+0.149439287 container init 87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_gould, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, distribution-scope=public, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, RELEASE=main, ceph=True, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container)
Nov 23 05:13:56 localhost podman[325595]: 2025-11-23 10:13:56.742031524 +0000 UTC m=+0.158186132 container start 87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_gould, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, release=553, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True)
Nov 23 05:13:56 localhost podman[325595]: 2025-11-23 10:13:56.742189478 +0000 UTC m=+0.158344116 container attach 87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_gould, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-09-24T08:57:55)
Nov 23 05:13:57 localhost vigilant_gould[325610]: [
Nov 23 05:13:57 localhost vigilant_gould[325610]:    {
Nov 23 05:13:57 localhost vigilant_gould[325610]:        "available": false,
Nov 23 05:13:57 localhost vigilant_gould[325610]:        "ceph_device": false,
Nov 23 05:13:57 localhost vigilant_gould[325610]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 23 05:13:57 localhost vigilant_gould[325610]:        "lsm_data": {},
Nov 23 05:13:57 localhost vigilant_gould[325610]:        "lvs": [],
Nov 23 05:13:57 localhost vigilant_gould[325610]:        "path": "/dev/sr0",
Nov 23 05:13:57 localhost vigilant_gould[325610]:        "rejected_reasons": [
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "Has a FileSystem",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "Insufficient space (<5GB)"
Nov 23 05:13:57 localhost vigilant_gould[325610]:        ],
Nov 23 05:13:57 localhost vigilant_gould[325610]:        "sys_api": {
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "actuators": null,
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "device_nodes": "sr0",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "human_readable_size": "482.00 KB",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "id_bus": "ata",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "model": "QEMU DVD-ROM",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "nr_requests": "2",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "partitions": {},
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "path": "/dev/sr0",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "removable": "1",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "rev": "2.5+",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "ro": "0",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "rotational": "1",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "sas_address": "",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "sas_device_handle": "",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "scheduler_mode": "mq-deadline",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "sectors": 0,
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "sectorsize": "2048",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "size": 493568.0,
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "support_discard": "0",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "type": "disk",
Nov 23 05:13:57 localhost vigilant_gould[325610]:            "vendor": "QEMU"
Nov 23 05:13:57 localhost vigilant_gould[325610]:        }
Nov 23 05:13:57 localhost vigilant_gould[325610]:    }
Nov 23 05:13:57 localhost vigilant_gould[325610]: ]
Nov 23 05:13:57 localhost systemd[1]: libpod-87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3.scope: Deactivated successfully.
Nov 23 05:13:57 localhost systemd[1]: libpod-87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3.scope: Consumed 1.171s CPU time.
Nov 23 05:13:57 localhost podman[325595]: 2025-11-23 10:13:57.89909943 +0000 UTC m=+1.315254068 container died 87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_gould, architecture=x86_64, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7)
Nov 23 05:13:57 localhost systemd[1]: var-lib-containers-storage-overlay-764e9f4c591fee5fe216306afc142e5b9b46ad6a866194f5188f212ec32cce0d-merged.mount: Deactivated successfully.
Nov 23 05:13:57 localhost podman[327715]: 2025-11-23 10:13:57.977425706 +0000 UTC m=+0.070529877 container remove 87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_gould, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, io.openshift.expose-services=, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 23 05:13:57 localhost systemd[1]: libpod-conmon-87cd0e9ed3480f6ef21fc171d8b6ca2261ee66449c64f056077fa9263578c3e3.scope: Deactivated successfully.
Nov 23 05:13:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:13:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:13:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:13:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:13:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:13:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:13:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 23 05:13:58 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:13:59 localhost nova_compute[281613]: 2025-11-23 10:13:59.657 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:13:59 localhost ceph-mon[302802]: from='mgr.44369 172.18.0.106:0/4210916137' entity='mgr.np0005532584.naxwxy' 
Nov 23 05:13:59 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:01 localhost nova_compute[281613]: 2025-11-23 10:14:01.138 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:04 localhost nova_compute[281613]: 2025-11-23 10:14:04.700 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:04 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:06 localhost nova_compute[281613]: 2025-11-23 10:14:06.141 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:14:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:14:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:14:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:14:09 localhost podman[327750]: 2025-11-23 10:14:09.208844873 +0000 UTC m=+0.104370476 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 23 05:14:09 localhost podman[327751]: 2025-11-23 10:14:09.250222295 +0000 UTC m=+0.142148611 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:14:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:14:09.280 159429 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:14:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:14:09.281 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:14:09 localhost ovn_metadata_agent[159423]: 2025-11-23 10:14:09.281 159429 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:14:09 localhost podman[327750]: 2025-11-23 10:14:09.316147438 +0000 UTC m=+0.211673041 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:14:09 localhost podman[327748]: 2025-11-23 10:14:09.333649588 +0000 UTC m=+0.231321479 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 05:14:09 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:14:09 localhost podman[327751]: 2025-11-23 10:14:09.362972225 +0000 UTC m=+0.254898551 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 23 05:14:09 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:14:09 localhost podman[327748]: 2025-11-23 10:14:09.414298835 +0000 UTC m=+0.311970766 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 23 05:14:09 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:14:09 localhost podman[327749]: 2025-11-23 10:14:09.502277549 +0000 UTC m=+0.400043822 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 23 05:14:09 localhost podman[327749]: 2025-11-23 10:14:09.514914459 +0000 UTC m=+0.412680712 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 23 05:14:09 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:14:09 localhost nova_compute[281613]: 2025-11-23 10:14:09.702 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:09 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:11 localhost nova_compute[281613]: 2025-11-23 10:14:11.142 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:11 localhost podman[240144]: time="2025-11-23T10:14:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:14:11 localhost podman[240144]: @ - - [23/Nov/2025:10:14:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:14:11 localhost podman[240144]: @ - - [23/Nov/2025:10:14:11 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19237 "" "Go-http-client/1.1"
Nov 23 05:14:13 localhost nova_compute[281613]: 2025-11-23 10:14:13.790 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:14 localhost nova_compute[281613]: 2025-11-23 10:14:14.736 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:14 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:16 localhost nova_compute[281613]: 2025-11-23 10:14:16.157 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:19 localhost nova_compute[281613]: 2025-11-23 10:14:19.740 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:19 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:21 localhost nova_compute[281613]: 2025-11-23 10:14:21.160 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:22 localhost openstack_network_exporter[242118]: ERROR   10:14:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:14:22 localhost openstack_network_exporter[242118]: ERROR   10:14:22 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:14:22 localhost openstack_network_exporter[242118]: ERROR   10:14:22 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:14:22 localhost openstack_network_exporter[242118]: ERROR   10:14:22 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:14:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:14:22 localhost openstack_network_exporter[242118]: ERROR   10:14:22 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:14:22 localhost openstack_network_exporter[242118]: 
Nov 23 05:14:23 localhost nova_compute[281613]: 2025-11-23 10:14:23.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:23 localhost nova_compute[281613]: 2025-11-23 10:14:23.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 23 05:14:24 localhost nova_compute[281613]: 2025-11-23 10:14:24.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:24 localhost nova_compute[281613]: 2025-11-23 10:14:24.020 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:24 localhost nova_compute[281613]: 2025-11-23 10:14:24.782 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:24 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:25 localhost nova_compute[281613]: 2025-11-23 10:14:25.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:25 localhost nova_compute[281613]: 2025-11-23 10:14:25.018 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 23 05:14:25 localhost nova_compute[281613]: 2025-11-23 10:14:25.019 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 23 05:14:25 localhost nova_compute[281613]: 2025-11-23 10:14:25.033 281617 DEBUG nova.compute.manager [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 23 05:14:25 localhost nova_compute[281613]: 2025-11-23 10:14:25.033 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:14:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:14:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:14:25 localhost podman[327831]: 2025-11-23 10:14:25.179018026 +0000 UTC m=+0.084621355 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, version=9.6, vcs-type=git, container_name=openstack_network_exporter, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 23 05:14:25 localhost podman[327831]: 2025-11-23 10:14:25.220026188 +0000 UTC m=+0.125629497 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that 
uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 23 05:14:25 localhost systemd[1]: tmp-crun.xkwoFv.mount: Deactivated successfully.
Nov 23 05:14:25 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:14:25 localhost podman[327832]: 2025-11-23 10:14:25.238229267 +0000 UTC m=+0.140901528 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 23 05:14:25 localhost podman[327832]: 2025-11-23 10:14:25.278081128 +0000 UTC m=+0.180753369 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute)
Nov 23 05:14:25 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:14:25 localhost podman[327833]: 2025-11-23 10:14:25.282117187 +0000 UTC m=+0.181247973 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 23 05:14:25 localhost podman[327833]: 2025-11-23 10:14:25.364925252 +0000 UTC m=+0.264056028 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 23 05:14:25 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:14:26 localhost nova_compute[281613]: 2025-11-23 10:14:26.190 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:27 localhost nova_compute[281613]: 2025-11-23 10:14:27.018 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:28 localhost nova_compute[281613]: 2025-11-23 10:14:28.014 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.019 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.036 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.037 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.037 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.037 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Auditing locally available compute resources for np0005532586.localdomain (node: np0005532586.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.038 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:14:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:14:29 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2604027664' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.482 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.720 281617 WARNING nova.virt.libvirt.driver [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.722 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Hypervisor/Node resource view: name=np0005532586.localdomain free_ram=11510MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.722 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.723 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.785 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.801 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.802 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Final resource view: name=np0005532586.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 23 05:14:29 localhost nova_compute[281613]: 2025-11-23 10:14:29.834 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 23 05:14:29 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:30 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 23 05:14:30 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3265261699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 23 05:14:30 localhost nova_compute[281613]: 2025-11-23 10:14:30.323 281617 DEBUG oslo_concurrency.processutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 23 05:14:30 localhost nova_compute[281613]: 2025-11-23 10:14:30.331 281617 DEBUG nova.compute.provider_tree [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed in ProviderTree for provider: 1df367d3-e79d-4d54-9b3c-f6af3beffa8b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 23 05:14:30 localhost nova_compute[281613]: 2025-11-23 10:14:30.350 281617 DEBUG nova.scheduler.client.report [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Inventory has not changed for provider 1df367d3-e79d-4d54-9b3c-f6af3beffa8b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 23 05:14:30 localhost nova_compute[281613]: 2025-11-23 10:14:30.353 281617 DEBUG nova.compute.resource_tracker [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Compute_service record updated for np0005532586.localdomain:np0005532586.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 23 05:14:30 localhost nova_compute[281613]: 2025-11-23 10:14:30.354 281617 DEBUG oslo_concurrency.lockutils [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 23 05:14:31 localhost nova_compute[281613]: 2025-11-23 10:14:31.222 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:33 localhost nova_compute[281613]: 2025-11-23 10:14:33.354 281617 DEBUG oslo_service.periodic_task [None req-8b2b1e5e-a68d-4911-874e-fae3db322b79 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 23 05:14:34 localhost nova_compute[281613]: 2025-11-23 10:14:34.826 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:34 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:36 localhost nova_compute[281613]: 2025-11-23 10:14:36.255 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:39 localhost sshd[327937]: main: sshd: ssh-rsa algorithm is disabled
Nov 23 05:14:39 localhost systemd-logind[761]: New session 82 of user zuul.
Nov 23 05:14:39 localhost systemd[1]: Started Session 82 of User zuul.
Nov 23 05:14:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.
Nov 23 05:14:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.
Nov 23 05:14:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.
Nov 23 05:14:39 localhost podman[327959]: 2025-11-23 10:14:39.536698262 +0000 UTC m=+0.093877494 container health_status bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 23 05:14:39 localhost podman[327959]: 2025-11-23 10:14:39.54818482 +0000 UTC m=+0.105364062 container exec_died bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 23 05:14:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.
Nov 23 05:14:39 localhost systemd[1]: bc5c33b0bd9f8a48df76e0e8411ef711499e15d7c4369ae1faf25f8c0a232799.service: Deactivated successfully.
Nov 23 05:14:39 localhost podman[327966]: 2025-11-23 10:14:39.593266313 +0000 UTC m=+0.140578840 container health_status 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 23 05:14:39 localhost podman[327966]: 2025-11-23 10:14:39.599176031 +0000 UTC m=+0.146488528 container exec_died 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:14:39 localhost systemd[1]: 8652369b4f8f6e79e7767afd3768d8ac23e8e806a18ca32fd814c89b3b8ea346.service: Deactivated successfully.
Nov 23 05:14:39 localhost podman[327962]: 2025-11-23 10:14:39.70742348 +0000 UTC m=+0.260132732 container health_status eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 23 05:14:39 localhost podman[328010]: 2025-11-23 10:14:39.679571981 +0000 UTC m=+0.106525673 container health_status aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 23 05:14:39 localhost podman[327962]: 2025-11-23 10:14:39.756984152 +0000 UTC m=+0.309693464 container exec_died eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 23 05:14:39 localhost podman[328010]: 2025-11-23 10:14:39.769817938 +0000 UTC m=+0.196771610 container exec_died aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 23 05:14:39 localhost nova_compute[281613]: 2025-11-23 10:14:39.826 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:39 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:40 localhost systemd[1]: eb54dee69d565c0ebfe3e3030a70ddd1f576ef09eb2dc200e70f887b2467a51c.service: Deactivated successfully.
Nov 23 05:14:40 localhost systemd[1]: aeac3825641c87c621e3d74841d1c1d43af920b072364f60a24d0ea0133393f9.service: Deactivated successfully.
Nov 23 05:14:40 localhost systemd[1]: tmp-crun.59t8Zn.mount: Deactivated successfully.
Nov 23 05:14:41 localhost nova_compute[281613]: 2025-11-23 10:14:41.256 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:41 localhost podman[240144]: time="2025-11-23T10:14:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 23 05:14:41 localhost podman[240144]: @ - - [23/Nov/2025:10:14:41 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156326 "" "Go-http-client/1.1"
Nov 23 05:14:41 localhost podman[240144]: @ - - [23/Nov/2025:10:14:41 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19240 "" "Go-http-client/1.1"
Nov 23 05:14:43 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "status"} v 0)
Nov 23 05:14:43 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3351056879' entity='client.admin' cmd={"prefix": "status"} : dispatch
Nov 23 05:14:44 localhost nova_compute[281613]: 2025-11-23 10:14:44.868 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:44 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:45 localhost ovs-vsctl[328277]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 23 05:14:46 localhost nova_compute[281613]: 2025-11-23 10:14:46.258 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:46 localhost journal[229448]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 23 05:14:46 localhost journal[229448]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 23 05:14:46 localhost journal[229448]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 23 05:14:46 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 328430 (lsinitrd)
Nov 23 05:14:46 localhost systemd[1]: Mounting EFI System Partition Automount...
Nov 23 05:14:46 localhost systemd[1]: Mounted EFI System Partition Automount.
Nov 23 05:14:47 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: cache status {prefix=cache status} (starting...)
Nov 23 05:14:47 localhost lvm[328506]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 23 05:14:47 localhost lvm[328506]: VG ceph_vg1 finished
Nov 23 05:14:47 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: client ls {prefix=client ls} (starting...)
Nov 23 05:14:47 localhost lvm[328519]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 23 05:14:47 localhost lvm[328519]: VG ceph_vg0 finished
Nov 23 05:14:47 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: damage ls {prefix=damage ls} (starting...)
Nov 23 05:14:48 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: dump loads {prefix=dump loads} (starting...)
Nov 23 05:14:48 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 23 05:14:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "report"} v 0)
Nov 23 05:14:48 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4238211147' entity='client.admin' cmd={"prefix": "report"} : dispatch
Nov 23 05:14:48 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 23 05:14:48 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 23 05:14:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 23 05:14:48 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4160910027' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 23 05:14:48 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 23 05:14:48 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 23 05:14:48 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2941413893' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Nov 23 05:14:48 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 23 05:14:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config log"} v 0)
Nov 23 05:14:49 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1899350493' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Nov 23 05:14:49 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 23 05:14:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 05:14:49 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/168170290' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Nov 23 05:14:49 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: ops {prefix=ops} (starting...)
Nov 23 05:14:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 23 05:14:49 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3769118842' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Nov 23 05:14:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 05:14:49 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3980943465' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Nov 23 05:14:49 localhost nova_compute[281613]: 2025-11-23 10:14:49.871 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:49 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:50 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: session ls {prefix=session ls} (starting...)
Nov 23 05:14:50 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 05:14:50 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/586164529' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Nov 23 05:14:50 localhost ceph-mds[286319]: mds.mds.np0005532586.mfohsb asok_command: status {prefix=status} (starting...)
Nov 23 05:14:50 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 05:14:50 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2095089999' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Nov 23 05:14:50 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "features"} v 0)
Nov 23 05:14:50 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2153661629' entity='client.admin' cmd={"prefix": "features"} : dispatch
Nov 23 05:14:51 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 23 05:14:51 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2650359476' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Nov 23 05:14:51 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 23 05:14:51 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3237286252' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Nov 23 05:14:51 localhost nova_compute[281613]: 2025-11-23 10:14:51.299 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:51 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 05:14:51 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3785737868' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Nov 23 05:14:51 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Nov 23 05:14:52 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1203579280' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Nov 23 05:14:52 localhost openstack_network_exporter[242118]: ERROR   10:14:52 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 23 05:14:52 localhost openstack_network_exporter[242118]: ERROR   10:14:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:14:52 localhost openstack_network_exporter[242118]: ERROR   10:14:52 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 23 05:14:52 localhost openstack_network_exporter[242118]: ERROR   10:14:52 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 23 05:14:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:14:52 localhost openstack_network_exporter[242118]: ERROR   10:14:52 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 23 05:14:52 localhost openstack_network_exporter[242118]: 
Nov 23 05:14:52 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 23 05:14:52 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1724465177' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Nov 23 05:14:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 23 05:14:53 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4266730576' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Nov 23 05:14:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 23 05:14:53 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/959013144' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Nov 23 05:14:53 localhost ceph-osd[32615]: set_mon_vals no callback set
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 33
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/4027327596,v1:172.18.0.107:6811/4027327596]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157aab/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 875442 data_alloc: 184549376 data_used: 16289792
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157aab/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157aab/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 875442 data_alloc: 184549376 data_used: 16289792
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157aab/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157aab/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 875442 data_alloc: 184549376 data_used: 16289792
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 90.127998352s of 90.206550598s, submitted: 17
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157aab/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 875810 data_alloc: 184549376 data_used: 16289792
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98697216 unmapped: 876544 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157bc5/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 34
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/4027327596,v1:172.18.0.107:6811/4027327596]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98246656 unmapped: 1327104 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98246656 unmapped: 1327104 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 875810 data_alloc: 184549376 data_used: 16289792
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98246656 unmapped: 1327104 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157bc5/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157bc5/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 875810 data_alloc: 184549376 data_used: 16289792
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157bc5/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157bc5/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 875810 data_alloc: 184549376 data_used: 16289792
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 heartbeat osd_stat(store_statfs(0x1b98b7000/0x0/0x1bfc00000, data 0x2157bc5/0x21d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98189312 unmapped: 1384448 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 17.670814514s of 17.681776047s, submitted: 3
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 35
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now 
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc reconnect Terminating session with v2:172.18.0.107:6810/4027327596
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc reconnect No active mgr available yet
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 ms_handle_reset con 0x55720f446400 session 0x557211f15680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98197504 unmapped: 1376256 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 36
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_configure stats_period=5
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b3000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98197504 unmapped: 1376256 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98222080 unmapped: 1351680 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 37
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98222080 unmapped: 1351680 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98222080 unmapped: 1351680 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 38
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 39
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 878734 data_alloc: 184549376 data_used: 16297984
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 heartbeat osd_stat(store_statfs(0x1b98b4000/0x0/0x1bfc00000, data 0x215a0cf/0x21da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98263040 unmapped: 1310720 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 82.557228088s of 82.633010864s, submitted: 16
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 40
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now 
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc reconnect Terminating session with v2:172.18.0.108:6810/335107178
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc reconnect No active mgr available yet
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 ms_handle_reset con 0x557211caec00 session 0x557211f870e0
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 883334 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98394112 unmapped: 1179648 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 41
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc reconnect Starting new session with [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_configure stats_period=5
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98492416 unmapped: 1081344 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98af000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98500608 unmapped: 1073152 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 42
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98500608 unmapped: 1073152 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98500608 unmapped: 1073152 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 43
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 882054 data_alloc: 184549376 data_used: 16306176
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 heartbeat osd_stat(store_statfs(0x1b98b0000/0x0/0x1bfc00000, data 0x215c8a1/0x21de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98287616 unmapped: 1286144 heap: 99573760 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 84.408142090s of 84.484458923s, submitted: 17
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98344960 unmapped: 2277376 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 887307 data_alloc: 184549376 data_used: 16318464
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98361344 unmapped: 2260992 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98402304 unmapped: 2220032 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98402304 unmapped: 2220032 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98402304 unmapped: 2220032 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a6000/0x0/0x1bfc00000, data 0x2160f94/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98402304 unmapped: 2220032 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 890683 data_alloc: 184549376 data_used: 16330752
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98402304 unmapped: 2220032 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 44
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 890683 data_alloc: 184549376 data_used: 16330752
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 890683 data_alloc: 184549376 data_used: 16330752
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 890683 data_alloc: 184549376 data_used: 16330752
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 890683 data_alloc: 184549376 data_used: 16330752
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 890683 data_alloc: 184549376 data_used: 16330752
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 891003 data_alloc: 184549376 data_used: 16338944
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 891003 data_alloc: 184549376 data_used: 16338944
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 891003 data_alloc: 184549376 data_used: 16338944
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 891003 data_alloc: 184549376 data_used: 16338944
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 891003 data_alloc: 184549376 data_used: 16338944
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 891003 data_alloc: 184549376 data_used: 16338944
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 891003 data_alloc: 184549376 data_used: 16338944
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98549760 unmapped: 2072576 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b98a7000/0x0/0x1bfc00000, data 0x21610ae/0x21e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 67.185173035s of 67.267364502s, submitted: 19
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 90 ms_handle_reset con 0x557210b37400 session 0x55721211fa40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98598912 unmapped: 2023424 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98598912 unmapped: 2023424 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98598912 unmapped: 2023424 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98598912 unmapped: 2023424 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98a0000/0x0/0x1bfc00000, data 0x2163449/0x21ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 898526 data_alloc: 184549376 data_used: 16351232
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98598912 unmapped: 2023424 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98a0000/0x0/0x1bfc00000, data 0x2163449/0x21ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98639872 unmapped: 1982464 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 ms_handle_reset con 0x557210b37c00 session 0x55721239cd20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98672640 unmapped: 1949696 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98672640 unmapped: 1949696 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98672640 unmapped: 1949696 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 899445 data_alloc: 184549376 data_used: 16367616
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b989e000/0x0/0x1bfc00000, data 0x21657d2/0x21ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98672640 unmapped: 1949696 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98672640 unmapped: 1949696 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98672640 unmapped: 1949696 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98672640 unmapped: 1949696 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b989e000/0x0/0x1bfc00000, data 0x21657d2/0x21ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.881366730s of 13.039980888s, submitted: 43
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 91 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 98721792 unmapped: 1900544 heap: 100622336 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 917694 data_alloc: 184549376 data_used: 16379904
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557213ffc400 session 0x55721239c960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 99999744 unmapped: 5873664 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100007936 unmapped: 5865472 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100007936 unmapped: 5865472 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b9484000/0x0/0x1bfc00000, data 0x257ba92/0x2609000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b9484000/0x0/0x1bfc00000, data 0x257ba92/0x2609000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100007936 unmapped: 5865472 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557213ffc800 session 0x55721239c5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 938387 data_alloc: 184549376 data_used: 16379904
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b9485000/0x0/0x1bfc00000, data 0x257ba92/0x2609000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 938387 data_alloc: 184549376 data_used: 16379904
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b9485000/0x0/0x1bfc00000, data 0x257ba92/0x2609000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b9485000/0x0/0x1bfc00000, data 0x257ba92/0x2609000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 938707 data_alloc: 184549376 data_used: 16388096
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b9485000/0x0/0x1bfc00000, data 0x257ba92/0x2609000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100024320 unmapped: 5849088 heap: 105873408 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 18.971441269s of 19.194061279s, submitted: 50
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557210b37400 session 0x55721239c3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 97673216 unmapped: 15032320 heap: 112705536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 97673216 unmapped: 15032320 heap: 112705536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1021031 data_alloc: 184549376 data_used: 13766656
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 97681408 unmapped: 15024128 heap: 112705536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b89b2000/0x0/0x1bfc00000, data 0x304ea92/0x30dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557210b37c00 session 0x557211c32b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b89b2000/0x0/0x1bfc00000, data 0x304ea92/0x30dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [0,0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 99729408 unmapped: 12976128 heap: 112705536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 99573760 unmapped: 13131776 heap: 112705536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 103677952 unmapped: 12697600 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557213ffc400 session 0x557211c32960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x5572131f5400 session 0x55721239de00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557213ffd000 session 0x55721239da40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557210b37400 session 0x557211f141e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557210b37c00 session 0x557211bec960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 99778560 unmapped: 16596992 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1187281 data_alloc: 184549376 data_used: 13766656
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557213ffc400 session 0x5572106f83c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100687872 unmapped: 15687680 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557213ffd000 session 0x5572106f8d20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 100876288 unmapped: 15499264 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557211caec00 session 0x5572106fb2c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b71a1000/0x0/0x1bfc00000, data 0x485cb14/0x48ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 101244928 unmapped: 15130624 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557210b37400 session 0x5572106f8b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 103776256 unmapped: 12599296 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 106946560 unmapped: 9428992 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1216341 data_alloc: 184549376 data_used: 21291008
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 106962944 unmapped: 9412608 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.802199364s of 12.674175262s, submitted: 188
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557210b37c00 session 0x557211bdef00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b7be8000/0x0/0x1bfc00000, data 0x3e16ab2/0x3ea6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 106987520 unmapped: 9388032 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 107266048 unmapped: 9109504 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 107831296 unmapped: 8544256 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 107880448 unmapped: 8495104 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b7be8000/0x0/0x1bfc00000, data 0x3e16ab2/0x3ea6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1222551 data_alloc: 184549376 data_used: 21839872
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 107880448 unmapped: 8495104 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 109715456 unmapped: 6660096 heap: 116375552 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 114114560 unmapped: 3317760 heap: 117432320 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 2285568 heap: 117432320 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b6e3f000/0x0/0x1bfc00000, data 0x4bbfab2/0x4c4f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b6e3f000/0x0/0x1bfc00000, data 0x4bbfab2/0x4c4f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 2277376 heap: 117432320 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1394777 data_alloc: 184549376 data_used: 23777280
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557213ffc400 session 0x557211bdeb40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118939648 unmapped: 2695168 heap: 121634816 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.178408623s of 10.864701271s, submitted: 188
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119709696 unmapped: 2973696 heap: 122683392 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119717888 unmapped: 2965504 heap: 122683392 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 120537088 unmapped: 2146304 heap: 122683392 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b631e000/0x0/0x1bfc00000, data 0x56e0ab2/0x5770000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118456320 unmapped: 4227072 heap: 122683392 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1515149 data_alloc: 184549376 data_used: 23818240
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119922688 unmapped: 7798784 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118824960 unmapped: 8896512 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557213ffd000 session 0x557211d9b860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118718464 unmapped: 9003008 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x5572136b2c00 session 0x557211e01c20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118718464 unmapped: 9003008 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b5d8c000/0x0/0x1bfc00000, data 0x51a0a30/0x522d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119013376 unmapped: 8708096 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1378450 data_alloc: 184549376 data_used: 23261184
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119013376 unmapped: 8708096 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.446758270s of 10.086778641s, submitted: 170
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x55720f447800 session 0x55721239c5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557210890000 session 0x5572106cda40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119054336 unmapped: 8667136 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 ms_handle_reset con 0x557210b37400 session 0x557211f14f00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119062528 unmapped: 8658944 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 93 ms_handle_reset con 0x557210b37c00 session 0x557211be25a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 93 ms_handle_reset con 0x557213ffd000 session 0x55721213bc20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119619584 unmapped: 8101888 heap: 127721472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b684f000/0x0/0x1bfc00000, data 0x51afd98/0x523e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 93 ms_handle_reset con 0x55720f447800 session 0x557211bdeb40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 123985920 unmapped: 17342464 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1500579 data_alloc: 184549376 data_used: 24973312
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 94 ms_handle_reset con 0x557210890000 session 0x5572106f81e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 123748352 unmapped: 17580032 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 95 ms_handle_reset con 0x5572133a3000 session 0x5572120fdc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 95 heartbeat osd_stat(store_statfs(0x1b5b81000/0x0/0x1bfc00000, data 0x5e7c154/0x5f0c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 95 ms_handle_reset con 0x557213ffc000 session 0x55721211e000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 123838464 unmapped: 17489920 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 95 ms_handle_reset con 0x557210679c00 session 0x55721239da40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 95 ms_handle_reset con 0x557210679c00 session 0x55721239cb40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 122650624 unmapped: 18677760 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 95 ms_handle_reset con 0x55720f447800 session 0x55721239c3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117506048 unmapped: 23822336 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 96 ms_handle_reset con 0x557210890000 session 0x557211d9ad20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117530624 unmapped: 23797760 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 96 heartbeat osd_stat(store_statfs(0x1b73eb000/0x0/0x1bfc00000, data 0x460f55d/0x46a3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1301000 data_alloc: 184549376 data_used: 19206144
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 96 ms_handle_reset con 0x5572131f5400 session 0x557211bede00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117530624 unmapped: 23797760 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 97 ms_handle_reset con 0x5572133a3000 session 0x55720f98ed20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 97 ms_handle_reset con 0x55720f447800 session 0x557210c665a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 97 ms_handle_reset con 0x5572131f5400 session 0x557211be30e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 97 ms_handle_reset con 0x557210890000 session 0x5572106f83c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.276107788s of 10.001654625s, submitted: 174
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 ms_handle_reset con 0x557213ffc000 session 0x55721239c5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 ms_handle_reset con 0x5572100ec400 session 0x55721239da40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 112123904 unmapped: 29204480 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 ms_handle_reset con 0x55720f447800 session 0x55721239cb40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 ms_handle_reset con 0x557210679c00 session 0x557210c66960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 ms_handle_reset con 0x557210890000 session 0x557210c66f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 ms_handle_reset con 0x5572131f5400 session 0x557210c670e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119144448 unmapped: 22183936 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 heartbeat osd_stat(store_statfs(0x1b8bb3000/0x0/0x1bfc00000, data 0x2e3f0b2/0x2ed9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 99 heartbeat osd_stat(store_statfs(0x1b7875000/0x0/0x1bfc00000, data 0x418008f/0x4219000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 99 ms_handle_reset con 0x557213ffc000 session 0x55721239cd20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119201792 unmapped: 22126592 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 99 ms_handle_reset con 0x557213ffc000 session 0x557210704000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119406592 unmapped: 21921792 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1271833 data_alloc: 184549376 data_used: 16732160
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 100 ms_handle_reset con 0x55720f447800 session 0x557210c674a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115703808 unmapped: 25624576 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 100 heartbeat osd_stat(store_statfs(0x1b8511000/0x0/0x1bfc00000, data 0x34de7dd/0x357a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost systemd-journald[47537]: Data hash table of /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal has a fill level at 75.0 (53724 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation.
Nov 23 05:14:53 localhost systemd-journald[47537]: /run/log/journal/6e0090cd4cf296f54418e234b90f721c/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 23 05:14:53 localhost rsyslogd[760]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115703808 unmapped: 25624576 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115703808 unmapped: 25624576 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115703808 unmapped: 25624576 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 101 heartbeat osd_stat(store_statfs(0x1b8511000/0x0/0x1bfc00000, data 0x34de7dd/0x357a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113295360 unmapped: 28033024 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1162183 data_alloc: 184549376 data_used: 13041664
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113295360 unmapped: 28033024 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113295360 unmapped: 28033024 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113295360 unmapped: 28033024 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.396457672s of 11.998228073s, submitted: 171
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113311744 unmapped: 28016640 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113311744 unmapped: 28016640 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 102 heartbeat osd_stat(store_statfs(0x1b850b000/0x0/0x1bfc00000, data 0x34e2c95/0x3582000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1165345 data_alloc: 184549376 data_used: 13045760
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113319936 unmapped: 28008448 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113319936 unmapped: 28008448 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113410048 unmapped: 27918336 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113385472 unmapped: 27942912 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113385472 unmapped: 27942912 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1172425 data_alloc: 184549376 data_used: 13410304
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 102 heartbeat osd_stat(store_statfs(0x1b8504000/0x0/0x1bfc00000, data 0x34eac95/0x358a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 102 ms_handle_reset con 0x557210679c00 session 0x557210c67a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 113385472 unmapped: 27942912 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 103 ms_handle_reset con 0x5572131f5400 session 0x55721213a000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 103 ms_handle_reset con 0x5572133a2c00 session 0x55721213a960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 122126336 unmapped: 19202048 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 103 ms_handle_reset con 0x55720f447800 session 0x55721213a3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116203520 unmapped: 25124864 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 104 ms_handle_reset con 0x557210679c00 session 0x557212ed92c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116211712 unmapped: 25116672 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.268059731s of 10.738009453s, submitted: 133
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116228096 unmapped: 25100288 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1342824 data_alloc: 184549376 data_used: 15781888
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 105 heartbeat osd_stat(store_statfs(0x1b7346000/0x0/0x1bfc00000, data 0x469d7e2/0x4746000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116252672 unmapped: 25075712 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 106 ms_handle_reset con 0x557213ffc000 session 0x5572108fbe00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 106 ms_handle_reset con 0x5572131f5400 session 0x557212ed8f00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 114130944 unmapped: 27197440 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 114130944 unmapped: 27197440 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 114147328 unmapped: 27181056 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 114147328 unmapped: 27181056 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1187262 data_alloc: 184549376 data_used: 12529664
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 107 ms_handle_reset con 0x557210891c00 session 0x55721213bc20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118702080 unmapped: 22626304 heap: 141328384 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 108 ms_handle_reset con 0x55720f447800 session 0x55721213be00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 108 ms_handle_reset con 0x5572133a3800 session 0x55721211e960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b7b06000/0x0/0x1bfc00000, data 0x3adc600/0x3b88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 126124032 unmapped: 23388160 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 126304256 unmapped: 23207936 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 109 ms_handle_reset con 0x5572131f5400 session 0x557211f86f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 109 ms_handle_reset con 0x557213ffc000 session 0x557212ed8d20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 110 heartbeat osd_stat(store_statfs(0x1b68b8000/0x0/0x1bfc00000, data 0x4d2651e/0x4dd3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 126402560 unmapped: 23109632 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.186733246s of 10.183445930s, submitted: 237
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118546432 unmapped: 30965760 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 111 ms_handle_reset con 0x5572137f1800 session 0x557211e01c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b68b3000/0x0/0x1bfc00000, data 0x4d28788/0x4dd7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1288255 data_alloc: 184549376 data_used: 13787136
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118661120 unmapped: 30851072 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 111 ms_handle_reset con 0x5572136b3c00 session 0x55720fdbc1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 111 ms_handle_reset con 0x5572131f4400 session 0x55721211f680
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 120537088 unmapped: 28975104 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 111 ms_handle_reset con 0x5572137f1800 session 0x5572118c9860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1077735 data_alloc: 184549376 data_used: 11038720
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1077735 data_alloc: 184549376 data_used: 11038720
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1077735 data_alloc: 184549376 data_used: 11038720
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1077735 data_alloc: 184549376 data_used: 11038720
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1077735 data_alloc: 184549376 data_used: 11038720
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117596160 unmapped: 31916032 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117596160 unmapped: 31916032 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1077735 data_alloc: 184549376 data_used: 11038720
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117596160 unmapped: 31916032 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1077735 data_alloc: 184549376 data_used: 11038720
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 31948800 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117571584 unmapped: 31940608 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117579776 unmapped: 31932416 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117579776 unmapped: 31932416 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117579776 unmapped: 31932416 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117579776 unmapped: 31932416 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117579776 unmapped: 31932416 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117579776 unmapped: 31932416 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117579776 unmapped: 31932416 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117579776 unmapped: 31932416 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117587968 unmapped: 31924224 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117596160 unmapped: 31916032 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117596160 unmapped: 31916032 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117596160 unmapped: 31916032 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117596160 unmapped: 31916032 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117604352 unmapped: 31907840 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117604352 unmapped: 31907840 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b944a000/0x0/0x1bfc00000, data 0x2193cee/0x2243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117604352 unmapped: 31907840 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117604352 unmapped: 31907840 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1078055 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117604352 unmapped: 31907840 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117604352 unmapped: 31907840 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 82.815246582s of 83.078865051s, submitted: 116
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 113 heartbeat osd_stat(store_statfs(0x1b9445000/0x0/0x1bfc00000, data 0x219605e/0x2248000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117620736 unmapped: 31891456 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 114 ms_handle_reset con 0x55720f447800 session 0x557211d9da40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116744192 unmapped: 32768000 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116752384 unmapped: 32759808 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1090101 data_alloc: 184549376 data_used: 11046912
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 114 ms_handle_reset con 0x5572131f5400 session 0x557211bdfe00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116760576 unmapped: 32751616 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 114 ms_handle_reset con 0x55720f447800 session 0x5572106aa5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116047872 unmapped: 33464320 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b9442000/0x0/0x1bfc00000, data 0x2198425/0x224c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116047872 unmapped: 33464320 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1092236 data_alloc: 184549376 data_used: 11059200
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943d000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1092236 data_alloc: 184549376 data_used: 11059200
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943d000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116113408 unmapped: 33398784 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1092236 data_alloc: 184549376 data_used: 11059200
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 18.619358063s of 18.771129608s, submitted: 48
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572131f4400 session 0x5572106f9860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116146176 unmapped: 33366016 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116146176 unmapped: 33366016 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943c000/0x0/0x1bfc00000, data 0x219a6e5/0x2252000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116146176 unmapped: 33366016 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572136b3c00 session 0x5572106f85a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572137f1800 session 0x5572120fd2c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116146176 unmapped: 33366016 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572133a3800 session 0x55721213a000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x55720f447800 session 0x557210befe00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 34406400 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1176073 data_alloc: 184549376 data_used: 11063296
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572131f4400 session 0x5572107041e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 34365440 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572136b3c00 session 0x557210704f00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943e000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1099191 data_alloc: 184549376 data_used: 11063296
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943e000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1099191 data_alloc: 184549376 data_used: 11063296
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943e000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 34349056 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 15.930266380s of 16.155052185s, submitted: 49
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572137f1800 session 0x5572108fa000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115195904 unmapped: 34316288 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115195904 unmapped: 34316288 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x557213ffc000 session 0x55721383b860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115195904 unmapped: 34316288 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x55720f447800 session 0x55721383bc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943d000/0x0/0x1bfc00000, data 0x219a683/0x2251000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115187712 unmapped: 34324480 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1101028 data_alloc: 184549376 data_used: 11063296
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572131f4400 session 0x55721383a960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943e000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115195904 unmapped: 34316288 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572136b3c00 session 0x55721383ab40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115220480 unmapped: 34291712 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572137f1800 session 0x55721383ad20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115220480 unmapped: 34291712 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115220480 unmapped: 34291712 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115228672 unmapped: 34283520 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1102993 data_alloc: 184549376 data_used: 11063296
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115228672 unmapped: 34283520 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943d000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115228672 unmapped: 34283520 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943d000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115228672 unmapped: 34283520 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115228672 unmapped: 34283520 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115228672 unmapped: 34283520 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1102993 data_alloc: 184549376 data_used: 11063296
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115228672 unmapped: 34283520 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115228672 unmapped: 34283520 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115236864 unmapped: 34275328 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943d000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115236864 unmapped: 34275328 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115236864 unmapped: 34275328 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1102993 data_alloc: 184549376 data_used: 11063296
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 18.396337509s of 18.583156586s, submitted: 48
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x557211f67000 session 0x557211d9da40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943d000/0x0/0x1bfc00000, data 0x219a673/0x2250000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115236864 unmapped: 34275328 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115236864 unmapped: 34275328 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x55720f447800 session 0x557211e014a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x557211f67000 session 0x557211e01a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115236864 unmapped: 34275328 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572131f4400 session 0x557214cb52c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115236864 unmapped: 34275328 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115236864 unmapped: 34275328 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b943c000/0x0/0x1bfc00000, data 0x219a6e5/0x2252000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1104638 data_alloc: 184549376 data_used: 11063296
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572136b3c00 session 0x557214cb4f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x5572137f1800 session 0x557214cb4d20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115245056 unmapped: 34267136 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 8101 writes, 33K keys, 8101 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 8101 writes, 2072 syncs, 3.91 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3030 writes, 10K keys, 3030 commit groups, 1.0 writes per commit group, ingest: 10.59 MB, 0.02 MB/s#012Interval WAL: 3030 writes, 1321 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115245056 unmapped: 34267136 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115245056 unmapped: 34267136 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 ms_handle_reset con 0x55720f447800 session 0x557214cb4960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115245056 unmapped: 34267136 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115253248 unmapped: 34258944 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1108848 data_alloc: 184549376 data_used: 11075584
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 116 heartbeat osd_stat(store_statfs(0x1b943c000/0x0/0x1bfc00000, data 0x219aa83/0x2252000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.439858437s of 10.543260574s, submitted: 26
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115294208 unmapped: 34217984 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 116 heartbeat osd_stat(store_statfs(0x1b9436000/0x0/0x1bfc00000, data 0x219cdfb/0x2257000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115294208 unmapped: 34217984 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115302400 unmapped: 34209792 heap: 149512192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115318784 unmapped: 42590208 heap: 157908992 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115359744 unmapped: 50946048 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1440063 data_alloc: 184549376 data_used: 11075584
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115392512 unmapped: 50913280 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 116 heartbeat osd_stat(store_statfs(0x1b5c37000/0x0/0x1bfc00000, data 0x599cdfb/0x5a57000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 123789312 unmapped: 42516480 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 117 ms_handle_reset con 0x557211f67000 session 0x557214cb4780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115425280 unmapped: 50880512 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115449856 unmapped: 50855936 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 118 ms_handle_reset con 0x5572131f4400 session 0x557214cb43c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115499008 unmapped: 50806784 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1127146 data_alloc: 184549376 data_used: 11100160
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.611950874s of 10.031038284s, submitted: 84
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 119 ms_handle_reset con 0x5572136b3c00 session 0x557211bed2c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115515392 unmapped: 50790400 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115515392 unmapped: 50790400 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 119 heartbeat osd_stat(store_statfs(0x1b942c000/0x0/0x1bfc00000, data 0x21a34bb/0x2260000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115515392 unmapped: 50790400 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115515392 unmapped: 50790400 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115597312 unmapped: 50708480 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1135583 data_alloc: 184549376 data_used: 11116544
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 120 ms_handle_reset con 0x557211f67c00 session 0x557211bec960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115630080 unmapped: 50675712 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 121 ms_handle_reset con 0x55720f447800 session 0x5572120fc960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115679232 unmapped: 50626560 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 121 ms_handle_reset con 0x557211f67000 session 0x55721213b4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 122 ms_handle_reset con 0x5572131f4400 session 0x55721213ba40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 123 ms_handle_reset con 0x5572136b3c00 session 0x55721213be00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115728384 unmapped: 50577408 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 123 heartbeat osd_stat(store_statfs(0x1b9416000/0x0/0x1bfc00000, data 0x21ad07a/0x2276000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115728384 unmapped: 50577408 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 124 ms_handle_reset con 0x557212181400 session 0x5572106f85a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115744768 unmapped: 50561024 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1163113 data_alloc: 184549376 data_used: 11116544
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 125 ms_handle_reset con 0x55720f447800 session 0x5572106aa780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 125 heartbeat osd_stat(store_statfs(0x1b940e000/0x0/0x1bfc00000, data 0x21b03e2/0x227d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.768001556s of 10.019982338s, submitted: 66
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 125 ms_handle_reset con 0x557212181400 session 0x5572120fc780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 125 ms_handle_reset con 0x557211f67000 session 0x5572106aa1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115728384 unmapped: 50577408 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 125 ms_handle_reset con 0x5572131f4400 session 0x5572106ffa40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115744768 unmapped: 50561024 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 126 ms_handle_reset con 0x5572136b3c00 session 0x5572106fe960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115810304 unmapped: 50495488 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 126 ms_handle_reset con 0x557212181400 session 0x55721383b2c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115818496 unmapped: 50487296 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 127 ms_handle_reset con 0x55720f447800 session 0x5572106fed20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 115892224 unmapped: 50413568 heap: 166305792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1179527 data_alloc: 184549376 data_used: 11128832
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 128 heartbeat osd_stat(store_statfs(0x1b93ff000/0x0/0x1bfc00000, data 0x21b87db/0x228d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 128 ms_handle_reset con 0x5572131f4400 session 0x5572106ff680
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 116703232 unmapped: 53805056 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 128 ms_handle_reset con 0x5572136b3400 session 0x5572150e63c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 128 ms_handle_reset con 0x5572137f2c00 session 0x5572150e65a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125272064 unmapped: 45236224 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 121503744 unmapped: 49004544 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 130 ms_handle_reset con 0x5572133a3c00 session 0x5572150e61e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 117563392 unmapped: 52944896 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 118005760 unmapped: 52502528 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2757297 data_alloc: 184549376 data_used: 11141120
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 131 ms_handle_reset con 0x55720f447800 session 0x5572150e6780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 131 heartbeat osd_stat(store_statfs(0x1ab3f7000/0x0/0x1bfc00000, data 0x101be4f6/0x10297000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.249619484s of 10.030437469s, submitted: 327
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 122454016 unmapped: 48054272 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 132 heartbeat osd_stat(store_statfs(0x1a97f7000/0x0/0x1bfc00000, data 0x11dbe4d3/0x11e96000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 119922688 unmapped: 50585600 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 132 ms_handle_reset con 0x557212181400 session 0x5572150e72c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124583936 unmapped: 45924352 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 120586240 unmapped: 49922048 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 133 heartbeat osd_stat(store_statfs(0x1a5ff0000/0x0/0x1bfc00000, data 0x155c2b2a/0x1569d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125206528 unmapped: 45301760 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 133 heartbeat osd_stat(store_statfs(0x1a4bf0000/0x0/0x1bfc00000, data 0x169c2b2a/0x16a9d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3867973 data_alloc: 184549376 data_used: 11165696
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125468672 unmapped: 45039616 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 133 heartbeat osd_stat(store_statfs(0x19f7f1000/0x0/0x1bfc00000, data 0x1bdc2b2a/0x1be9d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138625024 unmapped: 31883264 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 133 ms_handle_reset con 0x5572131f4400 session 0x55721383bc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 135905280 unmapped: 34603008 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x5572136b3400 session 0x5572108fb680
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128950272 unmapped: 41558016 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x5572137f2400 session 0x557211d9b0e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x557211f67000 session 0x5572106f94a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x55720f447800 session 0x557210704f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x557212181400 session 0x5572106fe960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124788736 unmapped: 45719552 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x5572131f4400 session 0x5572107041e0
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4960521 data_alloc: 184549376 data_used: 11177984
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 7.635313511s of 10.002345085s, submitted: 349
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125583360 unmapped: 44924928 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x557211f67000 session 0x5572106ff680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x55720f447800 session 0x557210befe00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125042688 unmapped: 45465600 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 heartbeat osd_stat(store_statfs(0x1b63ea000/0x0/0x1bfc00000, data 0x21c4e4c/0x22a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x557212181400 session 0x557210c66f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x5572137f2400 session 0x5572106abc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x557212180800 session 0x557211d9c960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x5572133a3c00 session 0x557210c66960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x557210891800 session 0x5572120fd4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124870656 unmapped: 45637632 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 ms_handle_reset con 0x557211f67000 session 0x55721211e3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 134 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 135 ms_handle_reset con 0x55720f447800 session 0x55720fdd6b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 135 ms_handle_reset con 0x557212180800 session 0x557211c33c20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124887040 unmapped: 45621248 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 135 ms_handle_reset con 0x55720f447800 session 0x557211d9da40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 135 ms_handle_reset con 0x557210891800 session 0x5572108fba40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124903424 unmapped: 45604864 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1347557 data_alloc: 184549376 data_used: 11186176
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 136 ms_handle_reset con 0x557211f67000 session 0x55720f98ed20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124919808 unmapped: 45588480 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 136 heartbeat osd_stat(store_statfs(0x1b8fec000/0x0/0x1bfc00000, data 0x21c6ffe/0x22a1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124919808 unmapped: 45588480 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 136 ms_handle_reset con 0x557212180800 session 0x5572108fa000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 136 ms_handle_reset con 0x5572133a3c00 session 0x557214cb41e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124903424 unmapped: 45604864 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 137 ms_handle_reset con 0x55720f447800 session 0x557214cb4780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124944384 unmapped: 45563904 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 124944384 unmapped: 45563904 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1357150 data_alloc: 184549376 data_used: 11186176
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 ms_handle_reset con 0x557210891800 session 0x5572120fd0e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 heartbeat osd_stat(store_statfs(0x1b8fe3000/0x0/0x1bfc00000, data 0x21cb7d8/0x22aa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125001728 unmapped: 45506560 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 ms_handle_reset con 0x557212180800 session 0x55721383b4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 ms_handle_reset con 0x557211f67000 session 0x5572120fda40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 ms_handle_reset con 0x557212181400 session 0x557212ed81e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125018112 unmapped: 45490176 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 ms_handle_reset con 0x55720f447800 session 0x557212ed92c0
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.652448654s of 11.612936974s, submitted: 299
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 ms_handle_reset con 0x557210891800 session 0x557212ed90e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125018112 unmapped: 45490176 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 139 ms_handle_reset con 0x557211f67000 session 0x557212ed94a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 139 heartbeat osd_stat(store_statfs(0x1b8fda000/0x0/0x1bfc00000, data 0x21cfe0c/0x22b3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 ms_handle_reset con 0x557212180800 session 0x5572106fe1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125059072 unmapped: 45449216 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 heartbeat osd_stat(store_statfs(0x1b8fd6000/0x0/0x1bfc00000, data 0x21d2166/0x22b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 ms_handle_reset con 0x5572137f2400 session 0x55721213b0e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 ms_handle_reset con 0x55720f447800 session 0x5572120fd4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125067264 unmapped: 45441024 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1364747 data_alloc: 184549376 data_used: 11198464
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 heartbeat osd_stat(store_statfs(0x1b8fd9000/0x0/0x1bfc00000, data 0x21d2104/0x22b5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 ms_handle_reset con 0x557210891800 session 0x5572120fc000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125067264 unmapped: 45441024 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125067264 unmapped: 45441024 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 ms_handle_reset con 0x557211f67000 session 0x557211bec1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125083648 unmapped: 45424640 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 ms_handle_reset con 0x557212257800 session 0x55721372fc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 ms_handle_reset con 0x557212180800 session 0x557211bf6780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125075456 unmapped: 45432832 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b8fd1000/0x0/0x1bfc00000, data 0x21d455c/0x22bc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 ms_handle_reset con 0x55720f447800 session 0x5572106cde00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125091840 unmapped: 45416448 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 ms_handle_reset con 0x557210891800 session 0x5572106cd680
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1374943 data_alloc: 184549376 data_used: 11210752
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 ms_handle_reset con 0x557211f67000 session 0x5572106fa3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 ms_handle_reset con 0x557212180800 session 0x55721372fe00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125100032 unmapped: 45408256 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 142 ms_handle_reset con 0x557212257800 session 0x557211d9c960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125124608 unmapped: 45383680 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.925375938s of 10.387001038s, submitted: 131
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x55720f447800 session 0x557211d9da40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125132800 unmapped: 45375488 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x557210891800 session 0x55720fdd6b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 handle_osd_map epochs [142,143], i have 143, src has [1,143]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x557211f67000 session 0x55721211e3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x557212232800 session 0x557210c66960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x557212180800 session 0x5572106fe960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125173760 unmapped: 45334528 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x557212180800 session 0x5572106abc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x55720f447800 session 0x557210befe00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125173760 unmapped: 45334528 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x557210891800 session 0x5572107041e0
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1379721 data_alloc: 184549376 data_used: 11223040
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 heartbeat osd_stat(store_statfs(0x1b8fcd000/0x0/0x1bfc00000, data 0x21d8c11/0x22c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x557211f67000 session 0x5572106f94a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125190144 unmapped: 45318144 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x557212232800 session 0x5572108fb680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 ms_handle_reset con 0x55720f447800 session 0x55721383bc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125222912 unmapped: 45285376 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 145 ms_handle_reset con 0x557210891800 session 0x5572150e65a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 145 handle_osd_map epochs [144,145], i have 145, src has [1,145]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 145 handle_osd_map epochs [144,145], i have 145, src has [1,145]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125272064 unmapped: 45236224 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125272064 unmapped: 45236224 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 145 ms_handle_reset con 0x557211f67000 session 0x5572150e61e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125272064 unmapped: 45236224 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1384601 data_alloc: 184549376 data_used: 11235328
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125272064 unmapped: 45236224 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 145 ms_handle_reset con 0x557211f67400 session 0x5572150e6780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 145 heartbeat osd_stat(store_statfs(0x1b8fc5000/0x0/0x1bfc00000, data 0x21dd299/0x22c9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125272064 unmapped: 45236224 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 145 ms_handle_reset con 0x55721217e800 session 0x557210c67680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.665000916s of 10.008132935s, submitted: 101
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125280256 unmapped: 45228032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 146 ms_handle_reset con 0x55720f447800 session 0x557210c672c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 146 ms_handle_reset con 0x557211f67000 session 0x5572106fe5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 146 ms_handle_reset con 0x557210891800 session 0x557210c66960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127639552 unmapped: 42868736 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 146 ms_handle_reset con 0x557211f67400 session 0x5572106cde00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 146 ms_handle_reset con 0x55721217e800 session 0x5572120fd4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125689856 unmapped: 44818432 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1467822 data_alloc: 184549376 data_used: 11268096
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b87ec000/0x0/0x1bfc00000, data 0x29b24f7/0x2aa1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125739008 unmapped: 44769280 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 146 ms_handle_reset con 0x557210891800 session 0x5572134901e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125689856 unmapped: 44818432 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 147 ms_handle_reset con 0x557211f67400 session 0x557213490000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 125968384 unmapped: 44539904 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 147 heartbeat osd_stat(store_statfs(0x1b7be7000/0x0/0x1bfc00000, data 0x35b2933/0x36a6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 148 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 148 ms_handle_reset con 0x557211f67000 session 0x55720fdd61e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 148 ms_handle_reset con 0x55720f447800 session 0x5572120fc000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 126033920 unmapped: 44474368 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 149 ms_handle_reset con 0x5572133a2400 session 0x55721213b0e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127090688 unmapped: 43417600 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 149 ms_handle_reset con 0x557210891800 session 0x55721211fa40
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1599454 data_alloc: 184549376 data_used: 11280384
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 150 handle_osd_map epochs [149,150], i have 150, src has [1,150]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 150 ms_handle_reset con 0x55720f447800 session 0x55720fdd6f00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127139840 unmapped: 43368448 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 45
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 150 ms_handle_reset con 0x557211f67400 session 0x5572134910e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 150 ms_handle_reset con 0x5572136b2400 session 0x557213491860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127328256 unmapped: 43180032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 150 ms_handle_reset con 0x557210678000 session 0x55720fdd7e00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.260550499s of 10.119343758s, submitted: 186
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 150 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 151 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 151 ms_handle_reset con 0x557210892c00 session 0x557213491a40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 151 ms_handle_reset con 0x5572107e0c00 session 0x557211bed680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 151 ms_handle_reset con 0x557210891800 session 0x557214b18f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 151 ms_handle_reset con 0x55720f447800 session 0x55721213a1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127533056 unmapped: 42975232 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 151 ms_handle_reset con 0x557211f67000 session 0x55721383be00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 152 ms_handle_reset con 0x557211f67400 session 0x5572150e7680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5cb6000/0x0/0x1bfc00000, data 0x433877e/0x4437000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127590400 unmapped: 42917888 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 153 ms_handle_reset con 0x55720f447800 session 0x55721239de00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127451136 unmapped: 43057152 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1690778 data_alloc: 184549376 data_used: 11309056
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 153 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 153 ms_handle_reset con 0x5572107e0c00 session 0x557210c661e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 153 ms_handle_reset con 0x557210891800 session 0x5572106fbe00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 153 ms_handle_reset con 0x557210892c00 session 0x557211d9ad20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127492096 unmapped: 43016192 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 153 ms_handle_reset con 0x55720f447800 session 0x5572106f8d20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 153 ms_handle_reset con 0x5572107e0c00 session 0x5572106f94a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127451136 unmapped: 43057152 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 154 ms_handle_reset con 0x557210891800 session 0x55721383a3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127492096 unmapped: 43016192 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b7612000/0x0/0x1bfc00000, data 0x29daadc/0x2ada000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 156 ms_handle_reset con 0x557211f67400 session 0x557211d9d4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127541248 unmapped: 42967040 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 156 ms_handle_reset con 0x5572136b2400 session 0x557211f86f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 156 ms_handle_reset con 0x55720f447800 session 0x5572106fba40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127614976 unmapped: 42893312 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 157 ms_handle_reset con 0x5572107e0c00 session 0x557211d9a1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1462145 data_alloc: 184549376 data_used: 11313152
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127614976 unmapped: 42893312 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 158 ms_handle_reset con 0x557210891800 session 0x557211e00d20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127631360 unmapped: 42876928 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 158 ms_handle_reset con 0x557211f67400 session 0x55721372f680
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 46
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.977458954s of 10.040865898s, submitted: 323
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 159 ms_handle_reset con 0x55721339e800 session 0x55720fdd6000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127655936 unmapped: 42852352 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 159 heartbeat osd_stat(store_statfs(0x1b7dcb000/0x0/0x1bfc00000, data 0x2219c94/0x2320000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 159 ms_handle_reset con 0x55720f447800 session 0x55720fdd63c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 159 ms_handle_reset con 0x5572107e0c00 session 0x55720fdd74a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127664128 unmapped: 42844160 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127664128 unmapped: 42844160 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1466714 data_alloc: 184549376 data_used: 11325440
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127664128 unmapped: 42844160 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 159 heartbeat osd_stat(store_statfs(0x1b7dd0000/0x0/0x1bfc00000, data 0x2219b97/0x231e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127664128 unmapped: 42844160 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127688704 unmapped: 42819584 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 127688704 unmapped: 42819584 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 160 ms_handle_reset con 0x557210891800 session 0x5572107045a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 160 heartbeat osd_stat(store_statfs(0x1b7dc8000/0x0/0x1bfc00000, data 0x221f05f/0x2325000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128262144 unmapped: 42246144 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1536700 data_alloc: 184549376 data_used: 11337728
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 160 heartbeat osd_stat(store_statfs(0x1b75a4000/0x0/0x1bfc00000, data 0x2a4405f/0x2b4a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128262144 unmapped: 42246144 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128262144 unmapped: 42246144 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 160 ms_handle_reset con 0x557211f67400 session 0x557210704000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128237568 unmapped: 42270720 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.296886444s of 10.553485870s, submitted: 117
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 161 ms_handle_reset con 0x55721339e800 session 0x55721213af00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128270336 unmapped: 42237952 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 161 ms_handle_reset con 0x55720f447800 session 0x55721213b0e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 161 ms_handle_reset con 0x5572107e0c00 session 0x5572108fb680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 161 heartbeat osd_stat(store_statfs(0x1b759f000/0x0/0x1bfc00000, data 0x2a463c7/0x2b4e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128270336 unmapped: 42237952 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 161 ms_handle_reset con 0x557210891800 session 0x5572108fa780
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1541214 data_alloc: 184549376 data_used: 11350016
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 161 ms_handle_reset con 0x5572131f5c00 session 0x557211beda40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 161 ms_handle_reset con 0x557212256000 session 0x5572120fcd20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 162 ms_handle_reset con 0x557211f67400 session 0x5572108fa960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128319488 unmapped: 42188800 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 162 ms_handle_reset con 0x557212256000 session 0x5572120fc3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 162 ms_handle_reset con 0x55720f447800 session 0x5572120fc1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 128335872 unmapped: 42172416 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 162 heartbeat osd_stat(store_statfs(0x1b759a000/0x0/0x1bfc00000, data 0x2a487a6/0x2b53000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129392640 unmapped: 41115648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129392640 unmapped: 41115648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129392640 unmapped: 41115648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1552896 data_alloc: 184549376 data_used: 12148736
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129392640 unmapped: 41115648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129392640 unmapped: 41115648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129392640 unmapped: 41115648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 163 heartbeat osd_stat(store_statfs(0x1b7595000/0x0/0x1bfc00000, data 0x2a4b7d4/0x2b58000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129392640 unmapped: 41115648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 163 heartbeat osd_stat(store_statfs(0x1b7595000/0x0/0x1bfc00000, data 0x2a4b7d4/0x2b58000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129400832 unmapped: 41107456 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1555498 data_alloc: 184549376 data_used: 12152832
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129400832 unmapped: 41107456 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129400832 unmapped: 41107456 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 129400832 unmapped: 41107456 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.703496933s of 14.988422394s, submitted: 76
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 163 heartbeat osd_stat(store_statfs(0x1b65b7000/0x0/0x1bfc00000, data 0x3a2a7d4/0x3b37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [1])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 134512640 unmapped: 35995648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137756672 unmapped: 32751616 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1688218 data_alloc: 184549376 data_used: 13398016
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136478720 unmapped: 34029568 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136478720 unmapped: 34029568 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136478720 unmapped: 34029568 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136478720 unmapped: 34029568 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 163 heartbeat osd_stat(store_statfs(0x1b64ce000/0x0/0x1bfc00000, data 0x3b137d4/0x3c20000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136478720 unmapped: 34029568 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 163 ms_handle_reset con 0x557210891800 session 0x557211d9da40
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1691106 data_alloc: 184549376 data_used: 13402112
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 163 heartbeat osd_stat(store_statfs(0x1b64ce000/0x0/0x1bfc00000, data 0x3b137d4/0x3c20000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136478720 unmapped: 34029568 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 164 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 164 ms_handle_reset con 0x5572131f5c00 session 0x557214cb4d20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136511488 unmapped: 33996800 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136511488 unmapped: 33996800 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.503668785s of 10.456348419s, submitted: 149
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 164 heartbeat osd_stat(store_statfs(0x1b64c3000/0x0/0x1bfc00000, data 0x3b18be4/0x3c2a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136511488 unmapped: 33996800 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 164 ms_handle_reset con 0x557211f67400 session 0x557212ed9c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 165 ms_handle_reset con 0x55720f447800 session 0x557214cb54a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136560640 unmapped: 33947648 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1707900 data_alloc: 184549376 data_used: 13426688
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136601600 unmapped: 33906688 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 166 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 166 ms_handle_reset con 0x557212256000 session 0x557210704f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 166 ms_handle_reset con 0x557210890400 session 0x5572120fcf00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 166 ms_handle_reset con 0x55720f447400 session 0x5572120fd0e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136994816 unmapped: 33513472 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 167 ms_handle_reset con 0x55720f447400 session 0x55721213a3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 167 ms_handle_reset con 0x557210890400 session 0x55720fdbc5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 167 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 167 ms_handle_reset con 0x557211f67400 session 0x557214b190e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136282112 unmapped: 34226176 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 167 ms_handle_reset con 0x557212256000 session 0x557211becb40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 ms_handle_reset con 0x557212191800 session 0x557211d9ad20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 ms_handle_reset con 0x5572136b3800 session 0x55721239d860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 ms_handle_reset con 0x55720f447800 session 0x557213490d20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136306688 unmapped: 34201600 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 heartbeat osd_stat(store_statfs(0x1b5f70000/0x0/0x1bfc00000, data 0x3c6373f/0x3d7c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 ms_handle_reset con 0x55720f447400 session 0x55720f243c20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136331264 unmapped: 34177024 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 ms_handle_reset con 0x5572133a2000 session 0x557214b194a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 ms_handle_reset con 0x557211f67400 session 0x557211f872c0
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1720531 data_alloc: 184549376 data_used: 13438976
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 168 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 169 ms_handle_reset con 0x557212191800 session 0x5572106dad20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 169 ms_handle_reset con 0x5572137f1400 session 0x557214b19a40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 169 ms_handle_reset con 0x55720f447400 session 0x55720fdbc1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 169 ms_handle_reset con 0x55720f447800 session 0x557214b18b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136364032 unmapped: 34144256 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 169 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 170 ms_handle_reset con 0x557212256000 session 0x557211be3c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 170 ms_handle_reset con 0x557212256000 session 0x5572106f8960
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136396800 unmapped: 34111488 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 171 ms_handle_reset con 0x55720f447400 session 0x5572118c94a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 171 ms_handle_reset con 0x55720f447800 session 0x557214b18960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 172 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 172 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137428992 unmapped: 33079296 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 172 ms_handle_reset con 0x5572137f1400 session 0x557214b185a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 172 ms_handle_reset con 0x557212191800 session 0x557211d9a780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 173 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.790033340s of 10.016474724s, submitted: 316
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 173 ms_handle_reset con 0x557212191800 session 0x5572106fbe00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137445376 unmapped: 33062912 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 173 heartbeat osd_stat(store_statfs(0x1b6080000/0x0/0x1bfc00000, data 0x3b490f5/0x3c6d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 47
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 173 ms_handle_reset con 0x55720f447400 session 0x5572108fb2c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137617408 unmapped: 32890880 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1746871 data_alloc: 184549376 data_used: 13463552
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 173 ms_handle_reset con 0x557212256000 session 0x557214b18000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 174 ms_handle_reset con 0x55720f447800 session 0x5572108fad20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137641984 unmapped: 32866304 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 175 ms_handle_reset con 0x5572136b3800 session 0x5572108fa1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 175 ms_handle_reset con 0x5572133a2000 session 0x5572134914a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 175 ms_handle_reset con 0x55720f447400 session 0x5572106fe780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137707520 unmapped: 32800768 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 175 ms_handle_reset con 0x55720f447800 session 0x557210c66960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 176 handle_osd_map epochs [176,176], i have 176, src has [1,176]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 176 ms_handle_reset con 0x5572137f1400 session 0x557214b18780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137740288 unmapped: 32768000 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 177 ms_handle_reset con 0x557212191800 session 0x5572134905a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 177 heartbeat osd_stat(store_statfs(0x1b703c000/0x0/0x1bfc00000, data 0x3b61b28/0x3c8e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137756672 unmapped: 32751616 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137846784 unmapped: 32661504 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 178 heartbeat osd_stat(store_statfs(0x1b703d000/0x0/0x1bfc00000, data 0x3b63d05/0x3c90000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1761637 data_alloc: 184549376 data_used: 13475840
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137871360 unmapped: 32636928 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 179 ms_handle_reset con 0x55720f447400 session 0x5572108fab40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 179 ms_handle_reset con 0x557210891800 session 0x55721383b860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 179 handle_osd_map epochs [179,179], i have 179, src has [1,179]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137879552 unmapped: 32628736 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 179 heartbeat osd_stat(store_statfs(0x1b7031000/0x0/0x1bfc00000, data 0x3b6d3b7/0x3c9b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 180 handle_osd_map epochs [180,180], i have 180, src has [1,180]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 180 handle_osd_map epochs [180,180], i have 180, src has [1,180]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 180 ms_handle_reset con 0x5572107e0c00 session 0x5572120fd4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137953280 unmapped: 32555008 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.402379036s of 10.012425423s, submitted: 223
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 180 ms_handle_reset con 0x557212191800 session 0x557212ed9a40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 180 ms_handle_reset con 0x55720f447800 session 0x5572134910e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 181 ms_handle_reset con 0x5572133a2000 session 0x5572106cda40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137265152 unmapped: 33243136 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 181 ms_handle_reset con 0x55720f447400 session 0x557210704f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 137265152 unmapped: 33243136 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1584908 data_alloc: 184549376 data_used: 11436032
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 181 heartbeat osd_stat(store_statfs(0x1b890d000/0x0/0x1bfc00000, data 0x228df7e/0x23be000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 182 ms_handle_reset con 0x5572107e0c00 session 0x557214cb4000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136142848 unmapped: 34365440 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 182 heartbeat osd_stat(store_statfs(0x1b890b000/0x0/0x1bfc00000, data 0x2290356/0x23c2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 136142848 unmapped: 34365440 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 182 ms_handle_reset con 0x557210891800 session 0x557214cb4d20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 183 ms_handle_reset con 0x557212191800 session 0x557214cb54a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 183 handle_osd_map epochs [183,183], i have 183, src has [1,183]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138264576 unmapped: 32243712 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 183 ms_handle_reset con 0x5572107e0c00 session 0x557211d9ba40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 183 ms_handle_reset con 0x557210891800 session 0x557214b183c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138313728 unmapped: 32194560 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138346496 unmapped: 32161792 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1589067 data_alloc: 184549376 data_used: 11448320
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138051584 unmapped: 32456704 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 186 handle_osd_map epochs [186,186], i have 186, src has [1,186]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 186 ms_handle_reset con 0x55720f447400 session 0x557212ed9c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 186 handle_osd_map epochs [186,186], i have 186, src has [1,186]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 186 handle_osd_map epochs [184,186], i have 186, src has [1,186]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 186 ms_handle_reset con 0x5572133a2000 session 0x55720f98ef00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138067968 unmapped: 32440320 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 187 heartbeat osd_stat(store_statfs(0x1b7753000/0x0/0x1bfc00000, data 0x22a42d3/0x23da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138100736 unmapped: 32407552 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.640137672s of 10.254094124s, submitted: 202
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138125312 unmapped: 32382976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138125312 unmapped: 32382976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1599482 data_alloc: 184549376 data_used: 11448320
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 187 heartbeat osd_stat(store_statfs(0x1b7745000/0x0/0x1bfc00000, data 0x22b1017/0x23e9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138125312 unmapped: 32382976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138125312 unmapped: 32382976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 handle_osd_map epochs [188,188], i have 188, src has [1,188]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 handle_osd_map epochs [188,188], i have 188, src has [1,188]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138125312 unmapped: 32382976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138125312 unmapped: 32382976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 heartbeat osd_stat(store_statfs(0x1b7735000/0x0/0x1bfc00000, data 0x22be360/0x23f8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138125312 unmapped: 32382976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1601842 data_alloc: 184549376 data_used: 11460608
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138125312 unmapped: 32382976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138141696 unmapped: 32366592 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138141696 unmapped: 32366592 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 heartbeat osd_stat(store_statfs(0x1b7729000/0x0/0x1bfc00000, data 0x22cb325/0x2405000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 heartbeat osd_stat(store_statfs(0x1b7729000/0x0/0x1bfc00000, data 0x22cb325/0x2405000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138141696 unmapped: 32366592 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.113000870s of 10.291820526s, submitted: 48
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 ms_handle_reset con 0x5572137f1400 session 0x5572150e6b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138141696 unmapped: 32366592 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1602584 data_alloc: 184549376 data_used: 11460608
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138149888 unmapped: 32358400 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 ms_handle_reset con 0x55720f447400 session 0x5572134630e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138149888 unmapped: 32358400 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 heartbeat osd_stat(store_statfs(0x1b771c000/0x0/0x1bfc00000, data 0x22d8d0e/0x2412000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 ms_handle_reset con 0x5572107e0c00 session 0x5572108fb680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 ms_handle_reset con 0x557210891800 session 0x557211d9dc20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138559488 unmapped: 31948800 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 ms_handle_reset con 0x5572133a2000 session 0x557213490f00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138592256 unmapped: 31916032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138592256 unmapped: 31916032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 heartbeat osd_stat(store_statfs(0x1b6ec5000/0x0/0x1bfc00000, data 0x2b2ead5/0x2c69000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1675219 data_alloc: 184549376 data_used: 11460608
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138592256 unmapped: 31916032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 ms_handle_reset con 0x5572137f1400 session 0x557213462780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138616832 unmapped: 31891456 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 ms_handle_reset con 0x55720f447400 session 0x5572150e70e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 ms_handle_reset con 0x5572107e0c00 session 0x557210c66d20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138633216 unmapped: 31875072 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 ms_handle_reset con 0x557210891800 session 0x557210c66f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 ms_handle_reset con 0x5572133a2000 session 0x557211c33a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138641408 unmapped: 31866880 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.463020325s of 10.987442017s, submitted: 116
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 ms_handle_reset con 0x557212256000 session 0x557211bf72c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138641408 unmapped: 31866880 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1683947 data_alloc: 184549376 data_used: 11472896
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 heartbeat osd_stat(store_statfs(0x1b6eb5000/0x0/0x1bfc00000, data 0x2b3c628/0x2c79000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138641408 unmapped: 31866880 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 138674176 unmapped: 31834112 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572107e0c00 session 0x5572134625a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x55720f447400 session 0x557211d9b0e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 heartbeat osd_stat(store_statfs(0x1b6ea6000/0x0/0x1bfc00000, data 0x2b46dfc/0x2c87000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 139722752 unmapped: 30785536 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x557210891800 session 0x557213491860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572133a2000 session 0x557211e01c20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 139337728 unmapped: 31170560 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572133a1c00 session 0x557210c674a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x55720f447400 session 0x557214cb5860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572107e0c00 session 0x557211beda40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 139345920 unmapped: 31162368 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 heartbeat osd_stat(store_statfs(0x1b6ea4000/0x0/0x1bfc00000, data 0x2b4a0c9/0x2c8a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1691378 data_alloc: 184549376 data_used: 11493376
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x557210891800 session 0x557210bef680
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 139354112 unmapped: 31154176 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572133a1c00 session 0x5572150e6d20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 139362304 unmapped: 31145984 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572133a2000 session 0x557214b185a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 heartbeat osd_stat(store_statfs(0x1b6e99000/0x0/0x1bfc00000, data 0x2b52c60/0x2c95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 139362304 unmapped: 31145984 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x55720f447400 session 0x557214b19860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572107e0c00 session 0x557212ed9a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140476416 unmapped: 30031872 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x557210891800 session 0x557212ed9e00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140476416 unmapped: 30031872 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1694808 data_alloc: 184549376 data_used: 11493376
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.271784782s of 10.941567421s, submitted: 97
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140500992 unmapped: 30007296 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572133a1c00 session 0x557212ed8f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572133a2000 session 0x557211e01a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140517376 unmapped: 29990912 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x55720f447400 session 0x557211d9dc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 ms_handle_reset con 0x5572107e0c00 session 0x5572108fb2c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140525568 unmapped: 29982720 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 191 ms_handle_reset con 0x5572133a1c00 session 0x5572106f9e00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 191 ms_handle_reset con 0x557210891800 session 0x5572108fab40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 191 heartbeat osd_stat(store_statfs(0x1b6e73000/0x0/0x1bfc00000, data 0x2b74edc/0x2cba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 191 ms_handle_reset con 0x5572133a2000 session 0x55721211f4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140558336 unmapped: 29949952 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 191 ms_handle_reset con 0x55720f447400 session 0x557214b19a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140550144 unmapped: 29958144 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1706786 data_alloc: 184549376 data_used: 11522048
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 192 ms_handle_reset con 0x5572133a2000 session 0x5572108fba40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140558336 unmapped: 29949952 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 192 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 192 ms_handle_reset con 0x5572107e0c00 session 0x5572134630e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 192 ms_handle_reset con 0x557210891800 session 0x5572150e6b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 192 ms_handle_reset con 0x5572133a1c00 session 0x557214b18b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 140558336 unmapped: 29949952 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 193 ms_handle_reset con 0x5572133a1c00 session 0x557211d9a780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141623296 unmapped: 28884992 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 193 heartbeat osd_stat(store_statfs(0x1b6e53000/0x0/0x1bfc00000, data 0x2b91ca6/0x2cd9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 193 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 193 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 193 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 193 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141664256 unmapped: 28844032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 195 ms_handle_reset con 0x55720f447400 session 0x557214cb4000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 195 ms_handle_reset con 0x5572107e0c00 session 0x557214cb4d20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141664256 unmapped: 28844032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1721912 data_alloc: 184549376 data_used: 11522048
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 195 ms_handle_reset con 0x557210891800 session 0x557214cb54a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141664256 unmapped: 28844032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 195 ms_handle_reset con 0x5572133a2000 session 0x5572106cda40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.321638107s of 10.726172447s, submitted: 162
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 196 ms_handle_reset con 0x55720f447400 session 0x55721383a000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 196 ms_handle_reset con 0x5572107e0c00 session 0x5572134914a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141737984 unmapped: 28770304 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141803520 unmapped: 28704768 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 198 ms_handle_reset con 0x557210891800 session 0x557210c66960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 198 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b6e34000/0x0/0x1bfc00000, data 0x2baa1ac/0x2cf8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141811712 unmapped: 28696576 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 198 ms_handle_reset con 0x5572133a1c00 session 0x5572106fe780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 198 ms_handle_reset con 0x55721217f000 session 0x5572106fbe00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141860864 unmapped: 28647424 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1732726 data_alloc: 184549376 data_used: 11522048
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 199 ms_handle_reset con 0x5572107e0c00 session 0x5572118c9860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141901824 unmapped: 28606464 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 200 ms_handle_reset con 0x55720f447400 session 0x5572118c83c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141893632 unmapped: 28614656 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 200 ms_handle_reset con 0x557210891800 session 0x5572108fbc20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 200 heartbeat osd_stat(store_statfs(0x1b6e1d000/0x0/0x1bfc00000, data 0x2bb9313/0x2d0f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 201 heartbeat osd_stat(store_statfs(0x1b6e0e000/0x0/0x1bfc00000, data 0x2bc6ab7/0x2d1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 141910016 unmapped: 28598272 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 201 ms_handle_reset con 0x557212193800 session 0x557213462b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 201 ms_handle_reset con 0x5572137f0c00 session 0x557211bec1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 142991360 unmapped: 27516928 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 202 heartbeat osd_stat(store_statfs(0x1b6e0f000/0x0/0x1bfc00000, data 0x2bc6a55/0x2d1e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 202 ms_handle_reset con 0x5572107e0c00 session 0x55721213be00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 202 ms_handle_reset con 0x557212193800 session 0x55721211f680
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 143024128 unmapped: 27484160 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1756554 data_alloc: 184549376 data_used: 11534336
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 203 ms_handle_reset con 0x557210891800 session 0x557211becb40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 143073280 unmapped: 27435008 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 203 ms_handle_reset con 0x557212232400 session 0x5572150e6000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 203 ms_handle_reset con 0x55721228c400 session 0x5572106daf00
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.281045914s of 10.006721497s, submitted: 219
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 203 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 204 ms_handle_reset con 0x5572121b1c00 session 0x557211bec960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 204 ms_handle_reset con 0x5572107e0c00 session 0x55720fdd6b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 204 ms_handle_reset con 0x557210891800 session 0x557211be3c20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 143138816 unmapped: 27369472 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 204 ms_handle_reset con 0x557212232400 session 0x557210c67e00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 204 ms_handle_reset con 0x557212193800 session 0x55720fdbc5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 205 ms_handle_reset con 0x5572107e0400 session 0x55721383b0e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 205 heartbeat osd_stat(store_statfs(0x1b6db6000/0x0/0x1bfc00000, data 0x2c166be/0x2d74000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 143286272 unmapped: 27222016 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 206 ms_handle_reset con 0x5572107e0c00 session 0x5572106dad20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 143523840 unmapped: 26984448 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 143581184 unmapped: 26927104 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1780347 data_alloc: 184549376 data_used: 11546624
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 206 handle_osd_map epochs [206,207], i have 206, src has [1,207]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 207 ms_handle_reset con 0x557210891800 session 0x557211f872c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 143589376 unmapped: 26918912 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 208 ms_handle_reset con 0x5572121b1c00 session 0x55721239d860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 143613952 unmapped: 26894336 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 208 handle_osd_map epochs [207,208], i have 208, src has [1,208]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 208 ms_handle_reset con 0x557212232400 session 0x55721213a780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 209 heartbeat osd_stat(store_statfs(0x1b6978000/0x0/0x1bfc00000, data 0x2c4dd32/0x2db5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 144703488 unmapped: 25804800 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 209 ms_handle_reset con 0x5572107e0400 session 0x55721372f860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 209 ms_handle_reset con 0x5572107e0c00 session 0x557211e00b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 144736256 unmapped: 25772032 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 144220160 unmapped: 26288128 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1799159 data_alloc: 184549376 data_used: 11546624
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 144285696 unmapped: 26222592 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.026152611s of 10.061644554s, submitted: 307
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 144293888 unmapped: 26214400 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 210 heartbeat osd_stat(store_statfs(0x1b68d2000/0x0/0x1bfc00000, data 0x2cf1ee4/0x2e5b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 144719872 unmapped: 25788416 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 48
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 211 heartbeat osd_stat(store_statfs(0x1b68c4000/0x0/0x1bfc00000, data 0x2cfc346/0x2e68000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 144269312 unmapped: 26238976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145399808 unmapped: 25108480 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1825695 data_alloc: 184549376 data_used: 11558912
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 211 ms_handle_reset con 0x55721339e000 session 0x557210befe00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145326080 unmapped: 25182208 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 211 ms_handle_reset con 0x557211f67800 session 0x5572150e6780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 211 ms_handle_reset con 0x557212192c00 session 0x557211bed4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145457152 unmapped: 25051136 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145604608 unmapped: 24903680 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 144973824 unmapped: 25534464 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 212 heartbeat osd_stat(store_statfs(0x1b6807000/0x0/0x1bfc00000, data 0x2dba222/0x2f26000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145104896 unmapped: 25403392 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1829643 data_alloc: 184549376 data_used: 11571200
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 212 ms_handle_reset con 0x5572107e0400 session 0x5572150e6f00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145104896 unmapped: 25403392 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 212 ms_handle_reset con 0x5572107e0c00 session 0x5572106fa960
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.212757111s of 10.002793312s, submitted: 499
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 212 heartbeat osd_stat(store_statfs(0x1b6806000/0x0/0x1bfc00000, data 0x2dba232/0x2f27000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 49
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 212 ms_handle_reset con 0x557211f67800 session 0x55721372f680
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145121280 unmapped: 25387008 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 ms_handle_reset con 0x55721339e000 session 0x557211f87a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 146309120 unmapped: 24199168 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 146456576 unmapped: 24051712 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 ms_handle_reset con 0x557212191400 session 0x557211d9cb40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145965056 unmapped: 24543232 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 ms_handle_reset con 0x5572107e0400 session 0x557211bdf2c0
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1836630 data_alloc: 184549376 data_used: 11583488
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 ms_handle_reset con 0x5572107e0c00 session 0x55721372f860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145973248 unmapped: 24535040 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 ms_handle_reset con 0x557211f67800 session 0x55721239d860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 heartbeat osd_stat(store_statfs(0x1b679e000/0x0/0x1bfc00000, data 0x2e21d51/0x2f90000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 ms_handle_reset con 0x55721339e000 session 0x5572106fba40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 145973248 unmapped: 24535040 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x557212193000 session 0x557211d9d680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x557210892000 session 0x55721372f4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x557212193000 session 0x55721383a3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148094976 unmapped: 22413312 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148103168 unmapped: 22405120 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148103168 unmapped: 22405120 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1850615 data_alloc: 184549376 data_used: 11595776
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 heartbeat osd_stat(store_statfs(0x1b6772000/0x0/0x1bfc00000, data 0x2e4b523/0x2fbb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148226048 unmapped: 22282240 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.847445488s of 10.313969612s, submitted: 152
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148324352 unmapped: 22183936 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148324352 unmapped: 22183936 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148430848 unmapped: 22077440 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x5572107e0400 session 0x55720fdd6b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148430848 unmapped: 22077440 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1853585 data_alloc: 184549376 data_used: 11595776
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x5572107e0c00 session 0x557211bec960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x557211f67800 session 0x557211c32960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 heartbeat osd_stat(store_statfs(0x1b674e000/0x0/0x1bfc00000, data 0x2e6def7/0x2fe0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x5572107e0400 session 0x5572150e6000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x5572107e0c00 session 0x5572150e7680
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148439040 unmapped: 22069248 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148439040 unmapped: 22069248 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x557210892000 session 0x55721213be00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x557211f67800 session 0x55721383be00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 ms_handle_reset con 0x55721339e000 session 0x55720fdd6000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 149618688 unmapped: 20889600 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 215 handle_osd_map epochs [215,215], i have 215, src has [1,215]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 215 ms_handle_reset con 0x5572107e0c00 session 0x55720fdd63c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 215 ms_handle_reset con 0x557210892000 session 0x55720fdbcd20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 149725184 unmapped: 20783104 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 215 heartbeat osd_stat(store_statfs(0x1b66ea000/0x0/0x1bfc00000, data 0x2ed0530/0x3044000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 216 ms_handle_reset con 0x557211f67800 session 0x55720fdd61e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 216 ms_handle_reset con 0x557212193000 session 0x55720fdd7e00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 148922368 unmapped: 21585920 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1871003 data_alloc: 184549376 data_used: 11612160
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 216 ms_handle_reset con 0x5572136b3000 session 0x557212ed8b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 216 ms_handle_reset con 0x557210892000 session 0x557213490b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 216 ms_handle_reset con 0x5572107e0c00 session 0x55721211fa40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 149037056 unmapped: 21471232 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 149037056 unmapped: 21471232 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.845459938s of 10.281464577s, submitted: 95
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 216 ms_handle_reset con 0x557212193000 session 0x55720fdd6f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 217 ms_handle_reset con 0x557211f67800 session 0x5572106fe780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 149200896 unmapped: 21307392 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 218 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 149299200 unmapped: 21209088 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 218 ms_handle_reset con 0x557211f66c00 session 0x5572106f9c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 218 heartbeat osd_stat(store_statfs(0x1b66a2000/0x0/0x1bfc00000, data 0x2f0c7cd/0x3088000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 219 ms_handle_reset con 0x557211f66400 session 0x557212ed8b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 219 ms_handle_reset con 0x55721217fc00 session 0x557211d9d4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 219 ms_handle_reset con 0x557212257000 session 0x557214cb4000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 149372928 unmapped: 21135360 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1908950 data_alloc: 184549376 data_used: 11673600
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 219 ms_handle_reset con 0x5572107e0c00 session 0x55720fdd7e00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 149487616 unmapped: 21020672 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 220 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 220 ms_handle_reset con 0x557210892000 session 0x55721211eb40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 220 ms_handle_reset con 0x557211f67800 session 0x557210bef680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 220 ms_handle_reset con 0x5572107e0c00 session 0x557211bede00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 150642688 unmapped: 19865600 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 220 ms_handle_reset con 0x557210892000 session 0x557211c32960
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 220 heartbeat osd_stat(store_statfs(0x1b6674000/0x0/0x1bfc00000, data 0x2f38a80/0x30b9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 221 ms_handle_reset con 0x557211f66400 session 0x557210c66000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 221 ms_handle_reset con 0x557212257000 session 0x557213491860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 221 ms_handle_reset con 0x55721217fc00 session 0x55721383a3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 150937600 unmapped: 19570688 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 221 ms_handle_reset con 0x5572107e0c00 session 0x557211c33a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151306240 unmapped: 19202048 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 221 ms_handle_reset con 0x557211f66400 session 0x557213462b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 222 ms_handle_reset con 0x557210892000 session 0x55721372f4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 222 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151298048 unmapped: 19210240 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 222 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 222 ms_handle_reset con 0x557211f67800 session 0x5572106fba40
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1904885 data_alloc: 184549376 data_used: 11689984
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 223 ms_handle_reset con 0x557211f67800 session 0x55721372f860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 223 ms_handle_reset con 0x5572107e0c00 session 0x557211d9cb40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151314432 unmapped: 19193856 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 223 heartbeat osd_stat(store_statfs(0x1b6623000/0x0/0x1bfc00000, data 0x2f8fba1/0x310a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.661293983s of 10.001761436s, submitted: 357
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151314432 unmapped: 19193856 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 223 ms_handle_reset con 0x557211f66400 session 0x55721239d860
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 223 ms_handle_reset con 0x557210892000 session 0x557211f87a40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 224 ms_handle_reset con 0x557212193000 session 0x55721211f4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 224 ms_handle_reset con 0x5572107e0c00 session 0x557213462780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151388160 unmapped: 19120128 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 224 ms_handle_reset con 0x557210892000 session 0x5572134625a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 225 handle_osd_map epochs [224,225], i have 225, src has [1,225]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 225 ms_handle_reset con 0x557211f66400 session 0x5572108fab40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 225 ms_handle_reset con 0x557211f67800 session 0x5572108fba40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 225 ms_handle_reset con 0x557212193000 session 0x5572107041e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151437312 unmapped: 19070976 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 226 handle_osd_map epochs [226,226], i have 226, src has [1,226]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 226 heartbeat osd_stat(store_statfs(0x1b6612000/0x0/0x1bfc00000, data 0x2f97461/0x311a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [0,0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 226 ms_handle_reset con 0x5572107e0c00 session 0x557211f14780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 226 ms_handle_reset con 0x55721217fc00 session 0x5572150e6f00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151453696 unmapped: 19054592 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1929236 data_alloc: 184549376 data_used: 11702272
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 226 heartbeat osd_stat(store_statfs(0x1b660d000/0x0/0x1bfc00000, data 0x2f99d4e/0x3120000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151461888 unmapped: 19046400 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 226 ms_handle_reset con 0x557210892000 session 0x5572150e6b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 227 ms_handle_reset con 0x557211f67800 session 0x55720f243c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 227 ms_handle_reset con 0x55721339f400 session 0x5572106fb4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151560192 unmapped: 18948096 heap: 170508288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 227 heartbeat osd_stat(store_statfs(0x1b6608000/0x0/0x1bfc00000, data 0x2f9c0c4/0x3123000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 227 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 151584768 unmapped: 27320320 heap: 178905088 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 229 ms_handle_reset con 0x557210892000 session 0x5572106da780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 229 ms_handle_reset con 0x5572107e0c00 session 0x557211d9da40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 152133632 unmapped: 26771456 heap: 178905088 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 229 ms_handle_reset con 0x55721217fc00 session 0x557213490b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 230 ms_handle_reset con 0x557211f67800 session 0x5572108fb680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 230 ms_handle_reset con 0x557211cae800 session 0x557214b18000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 230 handle_osd_map epochs [230,230], i have 230, src has [1,230]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 160546816 unmapped: 18358272 heap: 178905088 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2213820 data_alloc: 184549376 data_used: 11714560
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 230 ms_handle_reset con 0x5572107e0c00 session 0x55721372ef00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 152346624 unmapped: 26558464 heap: 178905088 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 230 ms_handle_reset con 0x557210892000 session 0x55721372ed20
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.159320831s of 10.013262749s, submitted: 228
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 152494080 unmapped: 26411008 heap: 178905088 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 231 ms_handle_reset con 0x557211cae800 session 0x55721372eb40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 231 heartbeat osd_stat(store_statfs(0x1b35bf000/0x0/0x1bfc00000, data 0x5fe4e80/0x616f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [0,0,1,0,0,0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153526272 unmapped: 33775616 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153403392 unmapped: 33898496 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153403392 unmapped: 33898496 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2493299 data_alloc: 184549376 data_used: 11726848
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153337856 unmapped: 33964032 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 231 ms_handle_reset con 0x557211f67800 session 0x55721372e780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 231 heartbeat osd_stat(store_statfs(0x1b0da8000/0x0/0x1bfc00000, data 0x87fa0d1/0x8986000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153337856 unmapped: 33964032 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161734656 unmapped: 25567232 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 232 ms_handle_reset con 0x557212180000 session 0x55721213bc20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153149440 unmapped: 34152448 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 233 ms_handle_reset con 0x557210892000 session 0x5572106f85a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153174016 unmapped: 34127872 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2898640 data_alloc: 184549376 data_used: 11739136
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 234 ms_handle_reset con 0x557211cae800 session 0x5572150e7c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 234 ms_handle_reset con 0x55721217fc00 session 0x55721372e5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161595392 unmapped: 25706496 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.315048218s of 10.003724098s, submitted: 153
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161751040 unmapped: 25550848 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 234 ms_handle_reset con 0x557211f67800 session 0x557215ae4000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 235 heartbeat osd_stat(store_statfs(0x1ad146000/0x0/0x1bfc00000, data 0xc051996/0xc1e6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 235 heartbeat osd_stat(store_statfs(0x1ab924000/0x0/0x1bfc00000, data 0xd872abb/0xda09000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153378816 unmapped: 33923072 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 236 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 236 ms_handle_reset con 0x557212193400 session 0x557215ae41e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161947648 unmapped: 25354240 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 236 ms_handle_reset con 0x557211cae800 session 0x557210c67a40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 237 ms_handle_reset con 0x557210892000 session 0x557215ae43c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 237 heartbeat osd_stat(store_statfs(0x1a910c000/0x0/0x1bfc00000, data 0x10086524/0x1021f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 237 ms_handle_reset con 0x557211f67800 session 0x557214cb52c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 237 heartbeat osd_stat(store_statfs(0x1a910c000/0x0/0x1bfc00000, data 0x10086524/0x1021f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 237 ms_handle_reset con 0x55721217fc00 session 0x557215ae4780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 153624576 unmapped: 33677312 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3461971 data_alloc: 184549376 data_used: 11751424
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 237 heartbeat osd_stat(store_statfs(0x1a810d000/0x0/0x1bfc00000, data 0x11086513/0x1121e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 154828800 unmapped: 32473088 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 237 heartbeat osd_stat(store_statfs(0x1a78ed000/0x0/0x1bfc00000, data 0x118a9343/0x11a40000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 154828800 unmapped: 32473088 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 155058176 unmapped: 32243712 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 ms_handle_reset con 0x557212233c00 session 0x557215ae4b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 155115520 unmapped: 32186368 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 ms_handle_reset con 0x557210892000 session 0x557215ae4f00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 heartbeat osd_stat(store_statfs(0x1a5895000/0x0/0x1bfc00000, data 0x138fc491/0x13a97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 155123712 unmapped: 32178176 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3806569 data_alloc: 184549376 data_used: 11763712
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 ms_handle_reset con 0x557211cae800 session 0x557215ae50e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 155222016 unmapped: 32079872 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 ms_handle_reset con 0x557211f67800 session 0x557215ae52c0
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.940221786s of 10.003693581s, submitted: 232
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 ms_handle_reset con 0x55721217fc00 session 0x557215ae5680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 155140096 unmapped: 32161792 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 239 ms_handle_reset con 0x557212233c00 session 0x557215ae5860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 163561472 unmapped: 23740416 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 239 ms_handle_reset con 0x557210892000 session 0x557215ae5c20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 156237824 unmapped: 31064064 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 239 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x14936800/0x14ad5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 156286976 unmapped: 31014912 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3981703 data_alloc: 184549376 data_used: 11776000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 240 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 240 ms_handle_reset con 0x557211cae800 session 0x557211d76000
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157442048 unmapped: 29859840 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 heartbeat osd_stat(store_statfs(0x1a37f2000/0x0/0x1bfc00000, data 0x1599715e/0x15b39000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157466624 unmapped: 29835264 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 ms_handle_reset con 0x557211f67800 session 0x557211d77a40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157466624 unmapped: 29835264 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 heartbeat osd_stat(store_statfs(0x1a2fd9000/0x0/0x1bfc00000, data 0x161b29ba/0x16355000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 ms_handle_reset con 0x55721217fc00 session 0x557211d77c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 ms_handle_reset con 0x557212233c00 session 0x557211d77e00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157573120 unmapped: 29728768 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 ms_handle_reset con 0x557210892000 session 0x557214b261e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157589504 unmapped: 29712384 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4319686 data_alloc: 184549376 data_used: 11788288
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 ms_handle_reset con 0x557211cae800 session 0x557214b26780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157663232 unmapped: 29638656 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 heartbeat osd_stat(store_statfs(0x1a0f76000/0x0/0x1bfc00000, data 0x18216b0a/0x183b8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 ms_handle_reset con 0x557211f67800 session 0x557214b26b40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 ms_handle_reset con 0x55721217fc00 session 0x557214b270e0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.295218468s of 10.156924248s, submitted: 199
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157696000 unmapped: 29605888 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157745152 unmapped: 29556736 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 157884416 unmapped: 29417472 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158351360 unmapped: 28950528 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 ms_handle_reset con 0x557212233c00 session 0x557214b27860
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4610410 data_alloc: 184549376 data_used: 11800576
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 heartbeat osd_stat(store_statfs(0x19e718000/0x0/0x1bfc00000, data 0x1aa73351/0x1ac16000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158441472 unmapped: 28860416 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 166838272 unmapped: 20463616 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 166838272 unmapped: 20463616 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 ms_handle_reset con 0x557210892000 session 0x557214b27a40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 ms_handle_reset con 0x557211cae800 session 0x557214b27c20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158507008 unmapped: 28794880 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 heartbeat osd_stat(store_statfs(0x19cece000/0x0/0x1bfc00000, data 0x1c2bed7c/0x1c460000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [0,0,0,4,0,4,4])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 243 ms_handle_reset con 0x557211f67800 session 0x557213bb4000
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 243 ms_handle_reset con 0x557211f66400 session 0x5572106fe5a0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 243 ms_handle_reset con 0x55721217fc00 session 0x557211bf72c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 160612352 unmapped: 26689536 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4882332 data_alloc: 184549376 data_used: 11812864
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 243 heartbeat osd_stat(store_statfs(0x19c395000/0x0/0x1bfc00000, data 0x1cdf2a45/0x1cf98000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 243 ms_handle_reset con 0x557210892000 session 0x557213bb50e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 160661504 unmapped: 26640384 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.882308006s of 10.006781578s, submitted: 261
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 ms_handle_reset con 0x557211cae800 session 0x557213bb52c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 160178176 unmapped: 27123712 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 ms_handle_reset con 0x55721217fc00 session 0x557215ae52c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 ms_handle_reset con 0x557211f66400 session 0x557213bb5a40
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 ms_handle_reset con 0x557211f67800 session 0x55721239d680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 ms_handle_reset con 0x557211f67800 session 0x55721383b4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159301632 unmapped: 28000256 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 ms_handle_reset con 0x557210892000 session 0x557211f86b40
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 50
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 ms_handle_reset con 0x557211cae800 session 0x55721383a1e0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159547392 unmapped: 27754496 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 heartbeat osd_stat(store_statfs(0x1b5d44000/0x0/0x1bfc00000, data 0x332fe95/0x34d4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159547392 unmapped: 27754496 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2118110 data_alloc: 184549376 data_used: 11812864
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158662656 unmapped: 28639232 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 244 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158662656 unmapped: 28639232 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 245 ms_handle_reset con 0x557211f66400 session 0x55721213af00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158662656 unmapped: 28639232 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 245 ms_handle_reset con 0x55721217fc00 session 0x5572118c9e00
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b4c82000/0x0/0x1bfc00000, data 0x33620e2/0x3509000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,1,2,3,5] op hist [0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158777344 unmapped: 28524544 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158785536 unmapped: 28516352 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 245 ms_handle_reset con 0x55721217fc00 session 0x5572106fe3c0
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2138364 data_alloc: 184549376 data_used: 11825152
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 158777344 unmapped: 28524544 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.194044113s of 10.076568604s, submitted: 473
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 245 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159916032 unmapped: 27385856 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 246 ms_handle_reset con 0x557210892000 session 0x55721213b680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 246 ms_handle_reset con 0x557211f67800 session 0x55721372ef00
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159924224 unmapped: 27377664 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 246 heartbeat osd_stat(store_statfs(0x1b47b7000/0x0/0x1bfc00000, data 0x382954c/0x39d6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 246 ms_handle_reset con 0x557212233c00 session 0x557211d76d20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 ms_handle_reset con 0x557211f66400 session 0x557211beda40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159825920 unmapped: 27475968 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 ms_handle_reset con 0x557210892000 session 0x55720f243c20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 ms_handle_reset con 0x557211f67800 session 0x5572150e6b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159924224 unmapped: 27377664 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2150333 data_alloc: 184549376 data_used: 11841536
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 ms_handle_reset con 0x55721217fc00 session 0x557211f14780
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 heartbeat osd_stat(store_statfs(0x1b4be3000/0x0/0x1bfc00000, data 0x33fecb0/0x35ab000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 ms_handle_reset con 0x557212233c00 session 0x5572108fba40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159940608 unmapped: 27361280 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159940608 unmapped: 27361280 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 159981568 unmapped: 27320320 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 heartbeat osd_stat(store_statfs(0x1b4b8d000/0x0/0x1bfc00000, data 0x3457980/0x3601000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161185792 unmapped: 26116096 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161185792 unmapped: 26116096 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2160779 data_alloc: 184549376 data_used: 11837440
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 ms_handle_reset con 0x55721225ac00 session 0x5572108fad20
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161325056 unmapped: 25976832 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.277284622s of 10.002829552s, submitted: 192
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161357824 unmapped: 25944064 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 249 ms_handle_reset con 0x557210892000 session 0x55721211f4a0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161529856 unmapped: 25772032 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 249 heartbeat osd_stat(store_statfs(0x1b4af0000/0x0/0x1bfc00000, data 0x34ec665/0x369b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161529856 unmapped: 25772032 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 249 ms_handle_reset con 0x557211f67800 session 0x557211bed2c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161538048 unmapped: 25763840 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 ms_handle_reset con 0x55721217fc00 session 0x557210c66000
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2175051 data_alloc: 184549376 data_used: 11862016
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 heartbeat osd_stat(store_statfs(0x1b4aee000/0x0/0x1bfc00000, data 0x34eea21/0x369f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 161669120 unmapped: 25632768 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 ms_handle_reset con 0x557212233c00 session 0x557214cb43c0
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 ms_handle_reset con 0x55721403f800 session 0x557212ed8b40
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 162791424 unmapped: 24510464 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 162799616 unmapped: 24502272 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 heartbeat osd_stat(store_statfs(0x1b469e000/0x0/0x1bfc00000, data 0x3540413/0x36f0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 162799616 unmapped: 24502272 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 162799616 unmapped: 24502272 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2187495 data_alloc: 184549376 data_used: 11862016
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 162799616 unmapped: 24502272 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.358329773s of 10.003405571s, submitted: 156
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 250 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 162807808 unmapped: 24494080 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 251 heartbeat osd_stat(store_statfs(0x1b4656000/0x0/0x1bfc00000, data 0x35859d0/0x3735000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 164847616 unmapped: 22454272 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 164978688 unmapped: 22323200 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 164978688 unmapped: 22323200 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 251 heartbeat osd_stat(store_statfs(0x1b3452000/0x0/0x1bfc00000, data 0x35e807e/0x379a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2201837 data_alloc: 184549376 data_used: 11874304
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 165232640 unmapped: 22069248 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 251 heartbeat osd_stat(store_statfs(0x1b3405000/0x0/0x1bfc00000, data 0x3635ca0/0x37e8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 165273600 unmapped: 22028288 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 251 heartbeat osd_stat(store_statfs(0x1b33e7000/0x0/0x1bfc00000, data 0x36532ed/0x3806000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 165298176 unmapped: 22003712 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 165543936 unmapped: 21757952 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 166723584 unmapped: 20578304 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 252 heartbeat osd_stat(store_statfs(0x1b33b1000/0x0/0x1bfc00000, data 0x36876f1/0x383b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2212151 data_alloc: 184549376 data_used: 11890688
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 166723584 unmapped: 20578304 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.157174110s of 10.001698494s, submitted: 225
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 252 heartbeat osd_stat(store_statfs(0x1b3382000/0x0/0x1bfc00000, data 0x36b6744/0x386c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 166141952 unmapped: 21159936 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 167198720 unmapped: 20103168 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 167346176 unmapped: 19955712 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 252 heartbeat osd_stat(store_statfs(0x1b334d000/0x0/0x1bfc00000, data 0x36ec726/0x38a0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 167559168 unmapped: 19742720 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2212705 data_alloc: 184549376 data_used: 11890688
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 167690240 unmapped: 19611648 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 167698432 unmapped: 19603456 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 168992768 unmapped: 18309120 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 254 heartbeat osd_stat(store_statfs(0x1b32cd000/0x0/0x1bfc00000, data 0x3769d88/0x3920000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 254 heartbeat osd_stat(store_statfs(0x1b32cd000/0x0/0x1bfc00000, data 0x3769d88/0x3920000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 168517632 unmapped: 18784256 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 254 heartbeat osd_stat(store_statfs(0x1b32aa000/0x0/0x1bfc00000, data 0x378ce82/0x3943000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 168534016 unmapped: 18767872 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2236499 data_alloc: 184549376 data_used: 11902976
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 254 heartbeat osd_stat(store_statfs(0x1b3281000/0x0/0x1bfc00000, data 0x37b5143/0x396b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 167755776 unmapped: 19546112 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.504941940s of 10.004302025s, submitted: 145
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 167854080 unmapped: 19447808 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 167862272 unmapped: 19439616 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 254 heartbeat osd_stat(store_statfs(0x1b320d000/0x0/0x1bfc00000, data 0x3827d96/0x39df000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 169312256 unmapped: 17989632 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 169312256 unmapped: 17989632 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2242381 data_alloc: 184549376 data_used: 11902976
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 169328640 unmapped: 17973248 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 169508864 unmapped: 17793024 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 169623552 unmapped: 17678336 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 255 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b3175000/0x0/0x1bfc00000, data 0x38be3e8/0x3a78000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 169697280 unmapped: 17604608 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 169992192 unmapped: 17309696 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2260217 data_alloc: 184549376 data_used: 11915264
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 171139072 unmapped: 16162816 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 171139072 unmapped: 16162816 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.837244034s of 10.269122124s, submitted: 120
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 171319296 unmapped: 15982592 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 171417600 unmapped: 15884288 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 256 heartbeat osd_stat(store_statfs(0x1b30a8000/0x0/0x1bfc00000, data 0x398ac0e/0x3b44000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 171417600 unmapped: 15884288 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2272447 data_alloc: 184549376 data_used: 11939840
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 170852352 unmapped: 16449536 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 170868736 unmapped: 16433152 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 172064768 unmapped: 15237120 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 172310528 unmapped: 14991360 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 256 heartbeat osd_stat(store_statfs(0x1b3014000/0x0/0x1bfc00000, data 0x3a1c62b/0x3bd8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 172351488 unmapped: 14950400 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2281129 data_alloc: 184549376 data_used: 11939840
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 172449792 unmapped: 14852096 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 172449792 unmapped: 14852096 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.890509605s of 10.464740753s, submitted: 151
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 173498368 unmapped: 13803520 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 173498368 unmapped: 13803520 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2f79000/0x0/0x1bfc00000, data 0x3ab5801/0x3c72000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [0,0,2])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 174686208 unmapped: 12615680 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2295653 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2f25000/0x0/0x1bfc00000, data 0x3b0d1eb/0x3cc9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 174727168 unmapped: 12574720 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 174899200 unmapped: 12402688 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 173932544 unmapped: 13369344 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 173932544 unmapped: 13369344 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 175022080 unmapped: 12279808 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2309429 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 175210496 unmapped: 12091392 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2e78000/0x0/0x1bfc00000, data 0x3bb669a/0x3d73000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 175210496 unmapped: 12091392 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.838865280s of 10.272104263s, submitted: 104
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2e55000/0x0/0x1bfc00000, data 0x3bdbbda/0x3d99000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 175210496 unmapped: 12091392 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 175423488 unmapped: 11878400 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 175611904 unmapped: 11689984 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2316709 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 175611904 unmapped: 11689984 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 176668672 unmapped: 10633216 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2da8000/0x0/0x1bfc00000, data 0x3c866ca/0x3e44000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 176807936 unmapped: 10493952 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 176807936 unmapped: 10493952 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 176914432 unmapped: 10387456 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2326039 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 176914432 unmapped: 10387456 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 176939008 unmapped: 10362880 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2d11000/0x0/0x1bfc00000, data 0x3d1cc2f/0x3edb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.574435234s of 10.000539780s, submitted: 109
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 178208768 unmapped: 9093120 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 178208768 unmapped: 9093120 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2ccb000/0x0/0x1bfc00000, data 0x3d63a28/0x3f22000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 178405376 unmapped: 8896512 heap: 187301888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2346353 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 178741248 unmapped: 9609216 heap: 188350464 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2c4d000/0x0/0x1bfc00000, data 0x3de134f/0x3fa0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [0,0,0,2])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 177643520 unmapped: 10706944 heap: 188350464 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 177823744 unmapped: 10526720 heap: 188350464 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 178257920 unmapped: 10092544 heap: 188350464 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2bfe000/0x0/0x1bfc00000, data 0x3e30757/0x3fed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [0,0,1])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 178544640 unmapped: 10854400 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2350799 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2bb9000/0x0/0x1bfc00000, data 0x3e77f76/0x4034000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 178544640 unmapped: 10854400 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 178724864 unmapped: 10674176 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b70000/0x0/0x1bfc00000, data 0x3ec2948/0x407e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.846844673s of 10.357544899s, submitted: 128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 179978240 unmapped: 9420800 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 179978240 unmapped: 9420800 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180363264 unmapped: 9035776 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2355589 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180363264 unmapped: 9035776 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180363264 unmapped: 9035776 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180371456 unmapped: 9027584 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b11000/0x0/0x1bfc00000, data 0x3f1f2c7/0x40db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b11000/0x0/0x1bfc00000, data 0x3f1f2c7/0x40db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180371456 unmapped: 9027584 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180371456 unmapped: 9027584 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2353461 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180371456 unmapped: 9027584 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b13000/0x0/0x1bfc00000, data 0x3f1f266/0x40db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180371456 unmapped: 9027584 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b10000/0x0/0x1bfc00000, data 0x3f1f360/0x40dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.764917374s of 11.967329979s, submitted: 42
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2356963 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b12000/0x0/0x1bfc00000, data 0x3f1f291/0x40db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2357887 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b11000/0x0/0x1bfc00000, data 0x3f1f362/0x40dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180379648 unmapped: 9019392 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180387840 unmapped: 9011200 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180387840 unmapped: 9011200 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b12000/0x0/0x1bfc00000, data 0x3f1f266/0x40db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180387840 unmapped: 9011200 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2358451 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.702996254s of 10.860272408s, submitted: 32
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b13000/0x0/0x1bfc00000, data 0x3f1f297/0x40db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2358275 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b12000/0x0/0x1bfc00000, data 0x3f1f266/0x40db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b2b12000/0x0/0x1bfc00000, data 0x3f1f293/0x40db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2360043 data_alloc: 184549376 data_used: 11952128
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180404224 unmapped: 8994816 heap: 189399040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.702033997s of 10.799960136s, submitted: 20
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 258 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180436992 unmapped: 10010624 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180436992 unmapped: 10010624 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180436992 unmapped: 10010624 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 258 heartbeat osd_stat(store_statfs(0x1b2b0f000/0x0/0x1bfc00000, data 0x3f21664/0x40df000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180436992 unmapped: 10010624 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2363739 data_alloc: 184549376 data_used: 11964416
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 259 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 259 heartbeat osd_stat(store_statfs(0x1b2b09000/0x0/0x1bfc00000, data 0x3f239aa/0x40e5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2370549 data_alloc: 184549376 data_used: 11976704
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.845419884s of 10.004640579s, submitted: 71
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 260 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 260 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 260 heartbeat osd_stat(store_statfs(0x1b2b02000/0x0/0x1bfc00000, data 0x3f25dbc/0x40ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180445184 unmapped: 10002432 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 260 heartbeat osd_stat(store_statfs(0x1b2b02000/0x0/0x1bfc00000, data 0x3f25dbc/0x40ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180453376 unmapped: 9994240 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2376751 data_alloc: 184549376 data_used: 11988992
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180453376 unmapped: 9994240 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180510720 unmapped: 9936896 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180518912 unmapped: 9928704 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180518912 unmapped: 9928704 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 261 heartbeat osd_stat(store_statfs(0x1b2b01000/0x0/0x1bfc00000, data 0x3f28195/0x40ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180518912 unmapped: 9928704 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2379419 data_alloc: 184549376 data_used: 12001280
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180518912 unmapped: 9928704 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.765604973s of 10.001043320s, submitted: 102
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 261 heartbeat osd_stat(store_statfs(0x1b2b01000/0x0/0x1bfc00000, data 0x3f28195/0x40ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 261 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180543488 unmapped: 9904128 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180543488 unmapped: 9904128 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 262 heartbeat osd_stat(store_statfs(0x1b2afe000/0x0/0x1bfc00000, data 0x3f2a318/0x40ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180543488 unmapped: 9904128 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180543488 unmapped: 9904128 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2382615 data_alloc: 184549376 data_used: 12013568
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 262 heartbeat osd_stat(store_statfs(0x1b2aff000/0x0/0x1bfc00000, data 0x3f2a318/0x40ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180543488 unmapped: 9904128 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180543488 unmapped: 9904128 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181592064 unmapped: 8855552 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181592064 unmapped: 8855552 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 263 heartbeat osd_stat(store_statfs(0x1b26fa000/0x0/0x1bfc00000, data 0x3f2c6e9/0x40f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180543488 unmapped: 9904128 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2384895 data_alloc: 184549376 data_used: 12025856
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180543488 unmapped: 9904128 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.783194542s of 10.001411438s, submitted: 76
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 263 heartbeat osd_stat(store_statfs(0x1b26fc000/0x0/0x1bfc00000, data 0x3f2c64e/0x40f2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180576256 unmapped: 9871360 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180576256 unmapped: 9871360 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 264 heartbeat osd_stat(store_statfs(0x1b26f7000/0x0/0x1bfc00000, data 0x3f2e937/0x40f7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180584448 unmapped: 9863168 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180584448 unmapped: 9863168 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2389969 data_alloc: 184549376 data_used: 12042240
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180592640 unmapped: 9854976 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180617216 unmapped: 9830400 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180617216 unmapped: 9830400 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 265 heartbeat osd_stat(store_statfs(0x1b26f0000/0x0/0x1bfc00000, data 0x3f30e3e/0x40fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 265 heartbeat osd_stat(store_statfs(0x1b26f0000/0x0/0x1bfc00000, data 0x3f30e3e/0x40fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180617216 unmapped: 9830400 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 265 heartbeat osd_stat(store_statfs(0x1b26ef000/0x0/0x1bfc00000, data 0x3f30ec9/0x40fe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180625408 unmapped: 9822208 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2399853 data_alloc: 184549376 data_used: 12054528
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180625408 unmapped: 9822208 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.737156868s of 10.180901527s, submitted: 72
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 266 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180633600 unmapped: 9814016 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 266 heartbeat osd_stat(store_statfs(0x1b26ed000/0x0/0x1bfc00000, data 0x3f33194/0x4100000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180633600 unmapped: 9814016 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 51
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180641792 unmapped: 9805824 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180641792 unmapped: 9805824 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2406905 data_alloc: 184549376 data_used: 12066816
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180641792 unmapped: 9805824 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180649984 unmapped: 9797632 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180649984 unmapped: 9797632 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e4000/0x0/0x1bfc00000, data 0x3f35851/0x4109000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180658176 unmapped: 9789440 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e5000/0x0/0x1bfc00000, data 0x3f3570f/0x4108000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180658176 unmapped: 9789440 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2415519 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e5000/0x0/0x1bfc00000, data 0x3f3570f/0x4108000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180666368 unmapped: 9781248 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180666368 unmapped: 9781248 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e7000/0x0/0x1bfc00000, data 0x3f355fd/0x4107000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180666368 unmapped: 9781248 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180666368 unmapped: 9781248 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180666368 unmapped: 9781248 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 13.402598381s of 13.594120026s, submitted: 76
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2414701 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180666368 unmapped: 9781248 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e6000/0x0/0x1bfc00000, data 0x3f355fb/0x4107000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180666368 unmapped: 9781248 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180666368 unmapped: 9781248 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2414315 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e9000/0x0/0x1bfc00000, data 0x3f3543e/0x4105000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e9000/0x0/0x1bfc00000, data 0x3f3543e/0x4105000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e9000/0x0/0x1bfc00000, data 0x3f3543e/0x4105000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2413145 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e9000/0x0/0x1bfc00000, data 0x3f3543e/0x4105000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.315948486s of 10.389658928s, submitted: 16
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e8000/0x0/0x1bfc00000, data 0x3f3549d/0x4106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2416505 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e7000/0x0/0x1bfc00000, data 0x3f35538/0x4107000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180674560 unmapped: 9773056 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e8000/0x0/0x1bfc00000, data 0x3f3549d/0x4106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180682752 unmapped: 9764864 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180682752 unmapped: 9764864 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180682752 unmapped: 9764864 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2417407 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180690944 unmapped: 9756672 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.775315285s of 10.830597878s, submitted: 8
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e7000/0x0/0x1bfc00000, data 0x3f35538/0x4107000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180699136 unmapped: 9748480 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180699136 unmapped: 9748480 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180699136 unmapped: 9748480 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180699136 unmapped: 9748480 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2415943 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e7000/0x0/0x1bfc00000, data 0x3f3546e/0x4106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180699136 unmapped: 9748480 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180707328 unmapped: 9740288 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180707328 unmapped: 9740288 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e8000/0x0/0x1bfc00000, data 0x3f3546e/0x4106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180715520 unmapped: 9732096 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180715520 unmapped: 9732096 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2415639 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180715520 unmapped: 9732096 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.925881386s of 10.001863480s, submitted: 15
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180715520 unmapped: 9732096 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e7000/0x0/0x1bfc00000, data 0x3f35509/0x4107000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180723712 unmapped: 9723904 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e8000/0x0/0x1bfc00000, data 0x3f3546e/0x4106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180723712 unmapped: 9723904 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180723712 unmapped: 9723904 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2416717 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180723712 unmapped: 9723904 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26e8000/0x0/0x1bfc00000, data 0x3f3546e/0x4106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180740096 unmapped: 9707520 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180740096 unmapped: 9707520 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180740096 unmapped: 9707520 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180748288 unmapped: 9699328 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2412879 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26ec000/0x0/0x1bfc00000, data 0x3f352d8/0x4102000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180748288 unmapped: 9699328 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.943022728s of 10.003229141s, submitted: 13
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180748288 unmapped: 9699328 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180748288 unmapped: 9699328 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180748288 unmapped: 9699328 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180748288 unmapped: 9699328 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26ec000/0x0/0x1bfc00000, data 0x3f352d8/0x4102000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2412879 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180748288 unmapped: 9699328 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180764672 unmapped: 9682944 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180764672 unmapped: 9682944 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180764672 unmapped: 9682944 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180764672 unmapped: 9682944 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2414823 data_alloc: 184549376 data_used: 12079104
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b26eb000/0x0/0x1bfc00000, data 0x3f35373/0x4103000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 180764672 unmapped: 9682944 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.958279610s of 10.000579834s, submitted: 9
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181821440 unmapped: 8626176 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 52
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181829632 unmapped: 8617984 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181829632 unmapped: 8617984 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 268 heartbeat osd_stat(store_statfs(0x1b26e6000/0x0/0x1bfc00000, data 0x3f37744/0x4107000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181829632 unmapped: 8617984 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2418689 data_alloc: 184549376 data_used: 12091392
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181829632 unmapped: 8617984 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 268 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2420985 data_alloc: 184549376 data_used: 12091392
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 269 heartbeat osd_stat(store_statfs(0x1b26e3000/0x0/0x1bfc00000, data 0x3f398f7/0x410a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.853247643s of 10.000213623s, submitted: 58
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2421697 data_alloc: 184549376 data_used: 12091392
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 269 heartbeat osd_stat(store_statfs(0x1b26e3000/0x0/0x1bfc00000, data 0x3f39992/0x410b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181837824 unmapped: 8609792 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 269 heartbeat osd_stat(store_statfs(0x1b26e3000/0x0/0x1bfc00000, data 0x3f39992/0x410b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181846016 unmapped: 8601600 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 181854208 unmapped: 8593408 heap: 190447616 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2428557 data_alloc: 184549376 data_used: 12091392
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.937476158s of 10.003174782s, submitted: 14
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 182943744 unmapped: 8552448 heap: 191496192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 269 heartbeat osd_stat(store_statfs(0x1b26cc000/0x0/0x1bfc00000, data 0x3f4f676/0x4122000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 269 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 182960128 unmapped: 8536064 heap: 191496192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 182968320 unmapped: 8527872 heap: 191496192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 182968320 unmapped: 8527872 heap: 191496192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 270 heartbeat osd_stat(store_statfs(0x1b368f000/0x0/0x1bfc00000, data 0x3f8a601/0x415e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183115776 unmapped: 8380416 heap: 191496192 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2437369 data_alloc: 184549376 data_used: 12103680
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 270 heartbeat osd_stat(store_statfs(0x1b3664000/0x0/0x1bfc00000, data 0x3fb7a46/0x418a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183230464 unmapped: 9314304 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183238656 unmapped: 9306112 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183238656 unmapped: 9306112 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183246848 unmapped: 9297920 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183246848 unmapped: 9297920 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2439413 data_alloc: 184549376 data_used: 12115968
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183435264 unmapped: 9109504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b35e3000/0x0/0x1bfc00000, data 0x403780c/0x420b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.727283478s of 10.990750313s, submitted: 88
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 271 ms_handle_reset con 0x55721403f000 session 0x55721211f860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183713792 unmapped: 8830976 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183771136 unmapped: 8773632 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183500800 unmapped: 9043968 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 53
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b35e4000/0x0/0x1bfc00000, data 0x40378e6/0x420a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183705600 unmapped: 8839168 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b35be000/0x0/0x1bfc00000, data 0x405d132/0x4230000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2444943 data_alloc: 184549376 data_used: 12115968
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183705600 unmapped: 8839168 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184762368 unmapped: 7782400 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184762368 unmapped: 7782400 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184819712 unmapped: 7725056 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184819712 unmapped: 7725056 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2449653 data_alloc: 184549376 data_used: 12128256
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 272 heartbeat osd_stat(store_statfs(0x1b3594000/0x0/0x1bfc00000, data 0x4085e3c/0x4259000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183771136 unmapped: 8773632 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.15] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1a] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183762944 unmapped: 8781824 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183762944 unmapped: 8781824 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 23K writes, 87K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.01 MB/s#012Cumulative WAL: 23K writes, 8146 syncs, 2.84 writes per sync, written: 0.08 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 15K writes, 54K keys, 15K commit groups, 1.0 writes per commit group, ingest: 48.02 MB, 0.08 MB/s#012Interval WAL: 15K writes, 6074 syncs, 2.47 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452055 data_alloc: 184549376 data_used: 12140544
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452055 data_alloc: 184549376 data_used: 12140544
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452055 data_alloc: 184549376 data_used: 12140544
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452055 data_alloc: 184549376 data_used: 12140544
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183779328 unmapped: 8765440 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183787520 unmapped: 8757248 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183787520 unmapped: 8757248 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183795712 unmapped: 8749056 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183795712 unmapped: 8749056 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183795712 unmapped: 8749056 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183795712 unmapped: 8749056 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183795712 unmapped: 8749056 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183795712 unmapped: 8749056 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183803904 unmapped: 8740864 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183803904 unmapped: 8740864 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183803904 unmapped: 8740864 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183803904 unmapped: 8740864 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183803904 unmapped: 8740864 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183803904 unmapped: 8740864 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183803904 unmapped: 8740864 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183803904 unmapped: 8740864 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183820288 unmapped: 8724480 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183820288 unmapped: 8724480 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183828480 unmapped: 8716288 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183828480 unmapped: 8716288 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183828480 unmapped: 8716288 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183828480 unmapped: 8716288 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183828480 unmapped: 8716288 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183828480 unmapped: 8716288 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183828480 unmapped: 8716288 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183836672 unmapped: 8708096 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183836672 unmapped: 8708096 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183836672 unmapped: 8708096 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183836672 unmapped: 8708096 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183836672 unmapped: 8708096 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183844864 unmapped: 8699904 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183844864 unmapped: 8699904 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183844864 unmapped: 8699904 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183844864 unmapped: 8699904 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183844864 unmapped: 8699904 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183844864 unmapped: 8699904 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183844864 unmapped: 8699904 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183844864 unmapped: 8699904 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183853056 unmapped: 8691712 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183853056 unmapped: 8691712 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183853056 unmapped: 8691712 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183853056 unmapped: 8691712 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357d000/0x0/0x1bfc00000, data 0x409a7a0/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183861248 unmapped: 8683520 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2452215 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 88.848640442s of 89.085029602s, submitted: 336
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 ms_handle_reset con 0x55720f447400 session 0x557211f143c0
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184303616 unmapped: 8241152 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184303616 unmapped: 8241152 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Got map version 54
Nov 23 05:14:53 localhost ceph-osd[32615]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 ms_handle_reset con 0x557212256c00 session 0x557211bde780
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184451072 unmapped: 8093696 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184459264 unmapped: 8085504 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 ms_handle_reset con 0x5572121b1000 session 0x5572120fd860
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184467456 unmapped: 8077312 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184475648 unmapped: 8069120 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184475648 unmapped: 8069120 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184475648 unmapped: 8069120 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184475648 unmapped: 8069120 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184475648 unmapped: 8069120 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184475648 unmapped: 8069120 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184475648 unmapped: 8069120 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184475648 unmapped: 8069120 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184483840 unmapped: 8060928 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184483840 unmapped: 8060928 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184483840 unmapped: 8060928 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184483840 unmapped: 8060928 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184483840 unmapped: 8060928 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184492032 unmapped: 8052736 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184492032 unmapped: 8052736 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184492032 unmapped: 8052736 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184500224 unmapped: 8044544 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184500224 unmapped: 8044544 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184508416 unmapped: 8036352 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184508416 unmapped: 8036352 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184508416 unmapped: 8036352 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184508416 unmapped: 8036352 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184508416 unmapped: 8036352 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184508416 unmapped: 8036352 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184516608 unmapped: 8028160 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184516608 unmapped: 8028160 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184516608 unmapped: 8028160 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184516608 unmapped: 8028160 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184516608 unmapped: 8028160 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b357e000/0x0/0x1bfc00000, data 0x409a9b3/0x4270000, compress 0x0/0x0/0x0, omap 0x649, meta 0x840f9b7), peers [0,1,2,3,5] op hist [])
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 184516608 unmapped: 8028160 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'config diff' '{prefix=config diff}'
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'config show' '{prefix=config show}'
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'counter dump' '{prefix=counter dump}'
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'counter schema' '{prefix=counter schema}'
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183812096 unmapped: 8732672 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: prioritycache tune_memory target: 3561598361 mapped: 183820288 unmapped: 8724480 heap: 192544768 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:53 localhost ceph-osd[32615]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:53 localhost ceph-osd[32615]: bluestore.MempoolThread(0x55720e51bb60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451735 data_alloc: 184549376 data_used: 12144640
Nov 23 05:14:53 localhost ceph-osd[32615]: do_command 'log dump' '{prefix=log dump}'
Nov 23 05:14:53 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 23 05:14:53 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4235157921' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Nov 23 05:14:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 23 05:14:54 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3028301708' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Nov 23 05:14:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 23 05:14:54 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1806367899' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Nov 23 05:14:54 localhost nova_compute[281613]: 2025-11-23 10:14:54.920 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:54 localhost ceph-mon[302802]: mon.np0005532586@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 23 05:14:55 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 23 05:14:55 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3213237305' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Nov 23 05:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.
Nov 23 05:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.
Nov 23 05:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.
Nov 23 05:14:56 localhost podman[329805]: 2025-11-23 10:14:56.224155199 +0000 UTC m=+0.134110586 container health_status 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 23 05:14:56 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 23 05:14:56 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/973500516' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Nov 23 05:14:56 localhost podman[329806]: 2025-11-23 10:14:56.268657265 +0000 UTC m=+0.174823249 container health_status 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 23 05:14:56 localhost podman[329805]: 2025-11-23 10:14:56.340181328 +0000 UTC m=+0.250136675 container exec_died 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 23 05:14:56 localhost nova_compute[281613]: 2025-11-23 10:14:56.346 281617 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 23 05:14:56 localhost systemd[1]: 4fdf743bd5fb9c87d94338f0d9bac1d9d64815248cf953d76497b7ac34b3a135.service: Deactivated successfully.
Nov 23 05:14:56 localhost podman[329806]: 2025-11-23 10:14:56.364818699 +0000 UTC m=+0.270984643 container exec_died 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 23 05:14:56 localhost systemd[1]: 7476dabd699e508fab35f05b15aa24de9496410b59241f0a5b15308521942694.service: Deactivated successfully.
Nov 23 05:14:56 localhost podman[329804]: 2025-11-23 10:14:56.3410297 +0000 UTC m=+0.237367250 container health_status 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public)
Nov 23 05:14:56 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 23 05:14:56 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3626344404' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Nov 23 05:14:56 localhost podman[329804]: 2025-11-23 10:14:56.423788394 +0000 UTC m=+0.320125944 container exec_died 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, version=9.6)
Nov 23 05:14:56 localhost systemd[1]: 02f06a89d7f8b1cd8703455e636d3eae5f506d07f4f8324cf8cca555fbc04390.service: Deactivated successfully.
Nov 23 05:14:56 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 23 05:14:56 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/37439561' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Nov 23 05:14:56 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 23 05:14:56 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1303286139' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Nov 23 05:14:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 23 05:14:57 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2474890893' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Nov 23 05:14:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 23 05:14:57 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/78207861' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Nov 23 05:14:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 23 05:14:57 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/840506094' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Nov 23 05:14:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 23 05:14:57 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3752014094' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Nov 23 05:14:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 23 05:14:57 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/789313433' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Nov 23 05:14:57 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 23 05:14:57 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1806976131' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Nov 23 05:14:58 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 23 05:14:58 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1386348520' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Nov 23 05:14:58 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 23 05:14:58 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4083050300' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Nov 23 05:14:58 localhost ceph-mon[302802]: mon.np0005532586@2(peon) e17 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 23 05:14:58 localhost ceph-mon[302802]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1996105483' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Nov 23 05:14:58 localhost ceph-osd[31668]: set_mon_vals no callback set
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 33
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/4027327596,v1:172.18.0.107:6811/4027327596]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 861205 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 861205 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 861205 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 861205 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 34
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/4027327596,v1:172.18.0.107:6811/4027327596]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 861205 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 861205 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 861205 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87580672 unmapped: 4644864 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 heartbeat osd_stat(store_statfs(0x1ba752000/0x0/0x1bfc00000, data 0x2462f61/0x24db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 35
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now 
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc reconnect Terminating session with v2:172.18.0.107:6810/4027327596
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc reconnect No active mgr available yet
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 107.726783752s of 107.734596252s, submitted: 2
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87597056 unmapped: 4628480 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 ms_handle_reset con 0x5606ed34e400 session 0x5606ed2054a0
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 36
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_configure stats_period=5
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87613440 unmapped: 4612096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87613440 unmapped: 4612096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87613440 unmapped: 4612096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 37
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 38
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 39
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/335107178,v1:172.18.0.108:6811/335107178]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87760896 unmapped: 4464640 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87769088 unmapped: 4456448 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87752704 unmapped: 4472832 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 heartbeat osd_stat(store_statfs(0x1ba74e000/0x0/0x1bfc00000, data 0x2465121/0x24de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87752704 unmapped: 4472832 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87752704 unmapped: 4472832 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87752704 unmapped: 4472832 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 87752704 unmapped: 4472832 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 40
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now 
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc reconnect Terminating session with v2:172.18.0.108:6810/335107178
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc reconnect No active mgr available yet
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 ms_handle_reset con 0x5606ed208000 session 0x5606ed204780
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 864849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 82.715499878s of 82.720512390s, submitted: 1
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88965120 unmapped: 3260416 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 41
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc reconnect Starting new session with [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_configure stats_period=5
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 42
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 43
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88637440 unmapped: 3588096 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 867849 data_alloc: 184549376 data_used: 5111808
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 88645632 unmapped: 3579904 heap: 92225536 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 heartbeat osd_stat(store_statfs(0x1b95aa000/0x0/0x1bfc00000, data 0x246748f/0x24e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 82.202392578s of 82.214607239s, submitted: 1
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 98451456 unmapped: 8462336 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 90038272 unmapped: 16875520 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 88 ms_handle_reset con 0x5606ed34f800 session 0x5606eab58780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 88 heartbeat osd_stat(store_statfs(0x1b813a000/0x0/0x1bfc00000, data 0x38d74c2/0x3954000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 98541568 unmapped: 8372224 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1074892 data_alloc: 184549376 data_used: 5124096
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 88 heartbeat osd_stat(store_statfs(0x1b8133000/0x0/0x1bfc00000, data 0x38d985d/0x395a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 90210304 unmapped: 16703488 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 ms_handle_reset con 0x5606ee6e1800 session 0x5606eb998000
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 90251264 unmapped: 16662528 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 90251264 unmapped: 16662528 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 90267648 unmapped: 16646144 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 90267648 unmapped: 16646144 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 90267648 unmapped: 16646144 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 44
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89718784 unmapped: 17195008 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89718784 unmapped: 17195008 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89718784 unmapped: 17195008 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89718784 unmapped: 17195008 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89735168 unmapped: 17178624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1166508 data_alloc: 184549376 data_used: 5136384
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 heartbeat osd_stat(store_statfs(0x1b6cbe000/0x0/0x1bfc00000, data 0x4d4bbe8/0x4dcf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89710592 unmapped: 17203200 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 69.072753906s of 69.388549805s, submitted: 47
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 90 ms_handle_reset con 0x5606ede93c00 session 0x5606eceb8f00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89251840 unmapped: 17661952 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b6cb8000/0x0/0x1bfc00000, data 0x4d4e373/0x4dd5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89251840 unmapped: 17661952 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b6cb8000/0x0/0x1bfc00000, data 0x4d4e373/0x4dd5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89251840 unmapped: 17661952 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b6cb8000/0x0/0x1bfc00000, data 0x4d4e373/0x4dd5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89251840 unmapped: 17661952 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1174769 data_alloc: 184549376 data_used: 5148672
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89251840 unmapped: 17661952 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89219072 unmapped: 17694720 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 91 ms_handle_reset con 0x5606ede93c00 session 0x5606ee8745a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89128960 unmapped: 17784832 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b6cb7000/0x0/0x1bfc00000, data 0x4d5030c/0x4dd7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89128960 unmapped: 17784832 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b6cb7000/0x0/0x1bfc00000, data 0x4d5030c/0x4dd7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89128960 unmapped: 17784832 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1176289 data_alloc: 184549376 data_used: 5160960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89128960 unmapped: 17784832 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89128960 unmapped: 17784832 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 89128960 unmapped: 17784832 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.196440697s of 12.450105667s, submitted: 68
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed208000 session 0x5606ee6a0d20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 90210304 unmapped: 16703488 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed34f800 session 0x5606ee8754a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 93831168 unmapped: 13082624 heap: 106913792 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1193570 data_alloc: 184549376 data_used: 9826304
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b6cb2000/0x0/0x1bfc00000, data 0x4d5255a/0x4ddb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,3,1,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e1800 session 0x5606eab5e5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e1c00 session 0x5606ee8752c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e1c00 session 0x5606ee6a12c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed208000 session 0x5606ea646780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed34f800 session 0x5606edea8d20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95330304 unmapped: 19464192 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95363072 unmapped: 19431424 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b5b88000/0x0/0x1bfc00000, data 0x5e7c55a/0x5f05000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede93c00 session 0x5606edea32c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95363072 unmapped: 19431424 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b5b88000/0x0/0x1bfc00000, data 0x5e7c55a/0x5f05000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95363072 unmapped: 19431424 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e1800 session 0x5606ee872780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed208000 session 0x5606ee87a000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed34f800 session 0x5606edea3680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95068160 unmapped: 19726336 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1339987 data_alloc: 184549376 data_used: 9826304
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95068160 unmapped: 19726336 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b5b63000/0x0/0x1bfc00000, data 0x5ea058d/0x5f2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95068160 unmapped: 19726336 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95068160 unmapped: 19726336 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 95068160 unmapped: 19726336 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b5b63000/0x0/0x1bfc00000, data 0x5ea058d/0x5f2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [1])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 97460224 unmapped: 17334272 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1368147 data_alloc: 184549376 data_used: 13238272
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 100966400 unmapped: 13828096 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 102432768 unmapped: 12361728 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 102432768 unmapped: 12361728 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 102440960 unmapped: 12353536 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b5b63000/0x0/0x1bfc00000, data 0x5ea058d/0x5f2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 102490112 unmapped: 12304384 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1416787 data_alloc: 184549376 data_used: 16941056
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 102490112 unmapped: 12304384 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b5b63000/0x0/0x1bfc00000, data 0x5ea058d/0x5f2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede92400 session 0x5606ecc5b4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed34f400 session 0x5606ea9fa1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 102498304 unmapped: 12296192 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ecaec400 session 0x5606ed4010e0
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.390893936s of 18.673543930s, submitted: 69
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed208000 session 0x5606ee872b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3c9800 session 0x5606edea2780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 102219776 unmapped: 12574720 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ec670b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec66cb40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3c9c00 session 0x5606ed166f00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3c9c00 session 0x5606ee87cf00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 104251392 unmapped: 10543104 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606eb3eb400 session 0x5606ed166b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed208000 session 0x5606ea6485a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b504e000/0x0/0x1bfc00000, data 0x69b35ff/0x6a40000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 104005632 unmapped: 10788864 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1512287 data_alloc: 184549376 data_used: 16965632
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b504e000/0x0/0x1bfc00000, data 0x69b35ff/0x6a40000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede93c00 session 0x5606edea30e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e1c00 session 0x5606ed3ce5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 104816640 unmapped: 9977856 heap: 114794496 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606eb3eb400 session 0x5606ee87d4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3c9c00 session 0x5606ed3d32c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede93c00 session 0x5606ed40bc20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed34b400 session 0x5606ed1672c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114515968 unmapped: 1826816 heap: 116342784 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede49000 session 0x5606ee458b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606eb3eb400 session 0x5606eca21680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 110280704 unmapped: 7110656 heap: 117391360 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed34b400 session 0x5606ea6472c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b3df7000/0x0/0x1bfc00000, data 0x7c09622/0x7c97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede74800 session 0x5606ecd47860
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 111435776 unmapped: 13303808 heap: 124739584 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e0800 session 0x5606ee875680
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede8e800 session 0x5606ee69fe00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede8e800 session 0x5606ed3d3e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606eb3eb400 session 0x5606ed166960
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed34b400 session 0x5606ed400780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 110247936 unmapped: 14491648 heap: 124739584 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1735503 data_alloc: 201326592 data_used: 17154048
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede74800 session 0x5606ed40b2c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e0800 session 0x5606ed3d2000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b3531000/0x0/0x1bfc00000, data 0x84cd694/0x855d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e0800 session 0x5606ecbf5e00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 110542848 unmapped: 14196736 heap: 124739584 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b3531000/0x0/0x1bfc00000, data 0x84cd694/0x855d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [1,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606eb3eb400 session 0x5606ed40af00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3c9800 session 0x5606ed40b680
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ede38f00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 111435776 unmapped: 17498112 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.973767281s of 10.132407188s, submitted: 324
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b2571000/0x0/0x1bfc00000, data 0x81b26c7/0x8244000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede8e800 session 0x5606ec66e5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 107241472 unmapped: 21692416 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 108748800 unmapped: 20185088 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 109617152 unmapped: 19316736 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1577564 data_alloc: 201326592 data_used: 17534976
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 109617152 unmapped: 19316736 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606eb3eb400 session 0x5606ed40b0e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 109641728 unmapped: 19292160 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 110018560 unmapped: 18915328 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b492e000/0x0/0x1bfc00000, data 0x70cf6b7/0x7160000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 111304704 unmapped: 17629184 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113115136 unmapped: 15818752 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1626258 data_alloc: 201326592 data_used: 22700032
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113147904 unmapped: 15785984 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113221632 unmapped: 15712256 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.615267754s of 10.048606873s, submitted: 119
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 119332864 unmapped: 9601024 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b3dc1000/0x0/0x1bfc00000, data 0x7c3c6b7/0x7ccd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118374400 unmapped: 10559488 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b3dc1000/0x0/0x1bfc00000, data 0x7c3c6b7/0x7ccd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118415360 unmapped: 10518528 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1759914 data_alloc: 201326592 data_used: 22896640
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3c9800 session 0x5606ed401a40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122175488 unmapped: 6758400 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121536512 unmapped: 7397376 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121602048 unmapped: 7331840 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121774080 unmapped: 7159808 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121782272 unmapped: 7151616 heap: 128933888 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1900612 data_alloc: 201326592 data_used: 22966272
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b2fae000/0x0/0x1bfc00000, data 0x8a496b7/0x8ada000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127451136 unmapped: 2531328 heap: 129982464 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128000000 unmapped: 1982464 heap: 129982464 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ec64a5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.060380936s of 10.225013733s, submitted: 348
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121856000 unmapped: 9175040 heap: 131031040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ee6e0800 session 0x5606ee458960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121888768 unmapped: 9142272 heap: 131031040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122085376 unmapped: 8945664 heap: 131031040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1690594 data_alloc: 184549376 data_used: 15872000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b3e24000/0x0/0x1bfc00000, data 0x7a4a694/0x7ada000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122118144 unmapped: 8912896 heap: 131031040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ed34b400 session 0x5606edea3c20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606ede74800 session 0x5606ed40a780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122118144 unmapped: 8912896 heap: 131031040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecce23c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121610240 unmapped: 9420800 heap: 131031040 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 93 ms_handle_reset con 0x5606ed34b400 session 0x5606ede39860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 93 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ed3d2780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 93 ms_handle_reset con 0x5606ed3c9800 session 0x5606ede394a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 132194304 unmapped: 12042240 heap: 144236544 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 93 ms_handle_reset con 0x5606ed3c9800 session 0x5606ee69ed20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 132825088 unmapped: 30515200 heap: 163340288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1991545 data_alloc: 201326592 data_used: 24276992
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b1e04000/0x0/0x1bfc00000, data 0x9bf1ad0/0x9c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 94 ms_handle_reset con 0x5606eb3eb400 session 0x5606ee4581e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 132833280 unmapped: 30507008 heap: 163340288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 95 ms_handle_reset con 0x5606ed34b400 session 0x5606edea2000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 95 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 95 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ec66cb40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 132841472 unmapped: 30498816 heap: 163340288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 95 ms_handle_reset con 0x5606ede74800 session 0x5606eb9a4d20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 95 ms_handle_reset con 0x5606ede74800 session 0x5606ec66c000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.610923767s of 10.351254463s, submitted: 179
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 119570432 unmapped: 43769856 heap: 163340288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 95 ms_handle_reset con 0x5606eb3eb400 session 0x5606eab8cf00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115703808 unmapped: 47636480 heap: 163340288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 96 ms_handle_reset con 0x5606ed34b400 session 0x5606ec6a0780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 96 ms_handle_reset con 0x5606ed3c9800 session 0x5606ec66f0e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 96 heartbeat osd_stat(store_statfs(0x1b3422000/0x0/0x1bfc00000, data 0x85d5519/0x866a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115720192 unmapped: 47620096 heap: 163340288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 96 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ecebab40
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1748505 data_alloc: 184549376 data_used: 7847936
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 96 ms_handle_reset con 0x5606ed208000 session 0x5606ee87a5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ecb95e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115826688 unmapped: 47513600 heap: 163340288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 heartbeat osd_stat(store_statfs(0x1b341d000/0x0/0x1bfc00000, data 0x85d789d/0x866e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,0,5])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 ms_handle_reset con 0x5606ed34b400 session 0x5606ee87ad20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 ms_handle_reset con 0x5606ed3c9800 session 0x5606ea9fb0e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 ms_handle_reset con 0x5606ede74800 session 0x5606eceb92c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 ms_handle_reset con 0x5606ed208000 session 0x5606ecb80780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec14c1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 ms_handle_reset con 0x5606ed34b400 session 0x5606eb34c1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 heartbeat osd_stat(store_statfs(0x1b4508000/0x0/0x1bfc00000, data 0x6f34fd2/0x6fcd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 ms_handle_reset con 0x5606ee6e0800 session 0x5606ecc6ed20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 ms_handle_reset con 0x5606ed3c9800 session 0x5606ea9fa000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 ms_handle_reset con 0x5606ed3c9800 session 0x5606ece4be00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 ms_handle_reset con 0x5606eb3eb400 session 0x5606eb9a14a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114827264 unmapped: 48513024 heap: 163340288 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 ms_handle_reset con 0x5606ed208000 session 0x5606ecc6f4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ecc914a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 ms_handle_reset con 0x5606ed34b400 session 0x5606eb99dc20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 ms_handle_reset con 0x5606ed34b400 session 0x5606ec14d860
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127574016 unmapped: 47251456 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 99 ms_handle_reset con 0x5606ed208000 session 0x5606ee87cf00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 99 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecc91680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127541248 unmapped: 47284224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 99 ms_handle_reset con 0x5606ed3c9800 session 0x5606ec14cd20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 99 ms_handle_reset con 0x5606ed3cdc00 session 0x5606ecc90d20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 99 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec66c3c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127516672 unmapped: 47308800 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1827136 data_alloc: 201326592 data_used: 16863232
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 100 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecc6eb40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117547008 unmapped: 57278464 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 100 heartbeat osd_stat(store_statfs(0x1b4d54000/0x0/0x1bfc00000, data 0x689a2b1/0x6933000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 116809728 unmapped: 58015744 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 116817920 unmapped: 58007552 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.373248100s of 10.632122040s, submitted: 352
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113221632 unmapped: 61603840 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 101 heartbeat osd_stat(store_statfs(0x1b4d54000/0x0/0x1bfc00000, data 0x689a2b1/0x6933000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113221632 unmapped: 61603840 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 101 heartbeat osd_stat(store_statfs(0x1b4d56000/0x0/0x1bfc00000, data 0x689c51b/0x6937000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1556153 data_alloc: 184549376 data_used: 5095424
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113221632 unmapped: 61603840 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113221632 unmapped: 61603840 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 101 heartbeat osd_stat(store_statfs(0x1b4d56000/0x0/0x1bfc00000, data 0x689c51b/0x6937000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113270784 unmapped: 61554688 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113270784 unmapped: 61554688 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113270784 unmapped: 61554688 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1559155 data_alloc: 184549376 data_used: 5095424
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 102 heartbeat osd_stat(store_statfs(0x1b4d52000/0x0/0x1bfc00000, data 0x689e769/0x693b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113295360 unmapped: 61530112 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 113090560 unmapped: 61734912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114368512 unmapped: 60456960 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114581504 unmapped: 60243968 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114581504 unmapped: 60243968 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 102 heartbeat osd_stat(store_statfs(0x1b4d53000/0x0/0x1bfc00000, data 0x689e769/0x693b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1577791 data_alloc: 184549376 data_used: 7172096
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.012166977s of 12.159733772s, submitted: 55
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 102 ms_handle_reset con 0x5606ed208000 session 0x5606eb9a0b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 102 ms_handle_reset con 0x5606ed34b400 session 0x5606ec66da40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114589696 unmapped: 60235776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 102 ms_handle_reset con 0x5606ee6e0000 session 0x5606eb9a41e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 102 heartbeat osd_stat(store_statfs(0x1b4d53000/0x0/0x1bfc00000, data 0x689e769/0x693b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 103 ms_handle_reset con 0x5606eb3eb400 session 0x5606eb9a85a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 103 ms_handle_reset con 0x5606ed34b400 session 0x5606eb9a8b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 103 ms_handle_reset con 0x5606ed208000 session 0x5606eab8d860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 103 heartbeat osd_stat(store_statfs(0x1b4d4e000/0x0/0x1bfc00000, data 0x68a0ad1/0x693f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 130883584 unmapped: 43941888 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 103 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecde8d20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122839040 unmapped: 51986432 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 104 ms_handle_reset con 0x5606ee6e0400 session 0x5606ee4594a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122929152 unmapped: 51896320 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 105 ms_handle_reset con 0x5606ee6e0400 session 0x5606ecd89e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 105 ms_handle_reset con 0x5606ede91800 session 0x5606eb9a8780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 105 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecb943c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 105 ms_handle_reset con 0x5606ed208000 session 0x5606ecb7e5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 123019264 unmapped: 51806208 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 105 ms_handle_reset con 0x5606ed34b400 session 0x5606ecebbc20
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1856019 data_alloc: 201326592 data_used: 15552512
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 105 heartbeat osd_stat(store_statfs(0x1b306f000/0x0/0x1bfc00000, data 0x8578971/0x861d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 123125760 unmapped: 51699712 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 105 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 106 ms_handle_reset con 0x5606ed208000 session 0x5606eb9a8000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 106 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecb81a40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115253248 unmapped: 59572224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115253248 unmapped: 59572224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115253248 unmapped: 59572224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115253248 unmapped: 59572224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606ede91800 session 0x5606ecde8b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606ee6e0400 session 0x5606ea72d860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecebbe00
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1587004 data_alloc: 184549376 data_used: 2961408
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecb941e0
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.165775299s of 10.004096985s, submitted: 210
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606ed208000 session 0x5606ecc91680
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606eb3eb400 session 0x5606eb999e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606ee6e0400 session 0x5606ecb7fc20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606ed3cd800 session 0x5606ecdd94a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec6054a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 ms_handle_reset con 0x5606ed208000 session 0x5606ec604f00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118702080 unmapped: 56123392 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 107 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 108 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecde83c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 108 ms_handle_reset con 0x5606ede91800 session 0x5606ecb81c20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b393b000/0x0/0x1bfc00000, data 0x7cad8a6/0x7d53000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128696320 unmapped: 46129152 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 108 ms_handle_reset con 0x5606ee6e0400 session 0x5606ecb7f680
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 109 ms_handle_reset con 0x5606ed208000 session 0x5606ec6050e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 109 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecd89c20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128696320 unmapped: 46129152 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 109 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecdd9860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 109 ms_handle_reset con 0x5606ede91800 session 0x5606ecd890e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 110 heartbeat osd_stat(store_statfs(0x1b36df000/0x0/0x1bfc00000, data 0x7f01029/0x7fac000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 129982464 unmapped: 44843008 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 111 ms_handle_reset con 0x5606ed203c00 session 0x5606ecd88b40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 116137984 unmapped: 58687488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b36db000/0x0/0x1bfc00000, data 0x7f03231/0x7faf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1562715 data_alloc: 184549376 data_used: 3641344
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 116137984 unmapped: 58687488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 111 ms_handle_reset con 0x5606ed34ec00 session 0x5606eb9a90e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 111 ms_handle_reset con 0x5606ed3c6c00 session 0x5606ee458000
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 116563968 unmapped: 58261504 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 111 ms_handle_reset con 0x5606ed203c00 session 0x5606ecdd8780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115097600 unmapped: 59727872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 59719680 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 59719680 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 59719680 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 59719680 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 59719680 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 59719680 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 59719680 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115105792 unmapped: 59719680 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115113984 unmapped: 59711488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115113984 unmapped: 59711488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115113984 unmapped: 59711488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115113984 unmapped: 59711488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115113984 unmapped: 59711488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115113984 unmapped: 59711488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115113984 unmapped: 59711488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115113984 unmapped: 59711488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115122176 unmapped: 59703296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115122176 unmapped: 59703296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115122176 unmapped: 59703296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115122176 unmapped: 59703296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115122176 unmapped: 59703296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115122176 unmapped: 59703296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115122176 unmapped: 59703296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115122176 unmapped: 59703296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115130368 unmapped: 59695104 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115130368 unmapped: 59695104 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115138560 unmapped: 59686912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115138560 unmapped: 59686912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115138560 unmapped: 59686912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115138560 unmapped: 59686912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115138560 unmapped: 59686912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115138560 unmapped: 59686912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 59678720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 59678720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 59678720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 59678720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 59678720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 59678720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 59678720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115146752 unmapped: 59678720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 59670528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 59670528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 59670528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 59670528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 59670528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 59670528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 59670528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115154944 unmapped: 59670528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 59662336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 59662336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 59662336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 59662336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 59662336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 59662336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 59662336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115163136 unmapped: 59662336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115171328 unmapped: 59654144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115171328 unmapped: 59654144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115171328 unmapped: 59654144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115171328 unmapped: 59654144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115171328 unmapped: 59654144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115171328 unmapped: 59654144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115171328 unmapped: 59654144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115171328 unmapped: 59654144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115179520 unmapped: 59645952 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115179520 unmapped: 59645952 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115179520 unmapped: 59645952 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x4d7e828/0x4e2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115179520 unmapped: 59645952 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1406202 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115179520 unmapped: 59645952 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115179520 unmapped: 59645952 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 86.525711060s of 87.183181763s, submitted: 241
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115187712 unmapped: 59637760 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115195904 unmapped: 59629568 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b6859000/0x0/0x1bfc00000, data 0x4d82f4c/0x4e33000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b6859000/0x0/0x1bfc00000, data 0x4d82f4c/0x4e33000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115245056 unmapped: 59580416 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1415806 data_alloc: 184549376 data_used: 2973696
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115253248 unmapped: 59572224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115253248 unmapped: 59572224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b685b000/0x0/0x1bfc00000, data 0x4d82f4c/0x4e33000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115253248 unmapped: 59572224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b685b000/0x0/0x1bfc00000, data 0x4d82f4c/0x4e33000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115261440 unmapped: 59564032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115261440 unmapped: 59564032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1417887 data_alloc: 184549376 data_used: 2985984
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115261440 unmapped: 59564032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115261440 unmapped: 59564032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115261440 unmapped: 59564032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115261440 unmapped: 59564032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115261440 unmapped: 59564032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1417887 data_alloc: 184549376 data_used: 2985984
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115261440 unmapped: 59564032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.186245918s of 14.367182732s, submitted: 58
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec66c000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 ms_handle_reset con 0x5606ed208000 session 0x5606ec66cd20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117891072 unmapped: 56934400 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117891072 unmapped: 56934400 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117891072 unmapped: 56934400 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6857000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117891072 unmapped: 56934400 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1419087 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117899264 unmapped: 56926208 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6857000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117899264 unmapped: 56926208 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6857000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117899264 unmapped: 56926208 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6857000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117899264 unmapped: 56926208 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 ms_handle_reset con 0x5606ed208000 session 0x5606ecdd9e00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 119373824 unmapped: 55451648 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1515716 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118677504 unmapped: 56147968 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 ms_handle_reset con 0x5606eb3eb400 session 0x5606eb9a52c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1429812 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1429812 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.097608566s of 20.559774399s, submitted: 101
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118685696 unmapped: 56139776 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1430545 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6857000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1431582 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1431582 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118710272 unmapped: 56115200 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1431582 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.505525589s of 18.583652496s, submitted: 20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6856000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118734848 unmapped: 56090624 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 9597 writes, 38K keys, 9597 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 9597 writes, 2407 syncs, 3.99 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3824 writes, 13K keys, 3824 commit groups, 1.0 writes per commit group, ingest: 14.85 MB, 0.02 MB/s#012Interval WAL: 3824 writes, 1626 syncs, 2.35 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118734848 unmapped: 56090624 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118767616 unmapped: 56057856 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118767616 unmapped: 56057856 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118767616 unmapped: 56057856 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1432924 data_alloc: 184549376 data_used: 6918144
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118775808 unmapped: 56049664 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b6857000/0x0/0x1bfc00000, data 0x4d8519a/0x4e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118775808 unmapped: 56049664 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118824960 unmapped: 56000512 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 116 ms_handle_reset con 0x5606ed203c00 session 0x5606ee883680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118857728 unmapped: 55967744 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118857728 unmapped: 55967744 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1441747 data_alloc: 184549376 data_used: 6930432
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.283382416s of 10.543031693s, submitted: 65
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118857728 unmapped: 55967744 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 116 heartbeat osd_stat(store_statfs(0x1b6850000/0x0/0x1bfc00000, data 0x4d87548/0x4e3d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118857728 unmapped: 55967744 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 116 heartbeat osd_stat(store_statfs(0x1b6850000/0x0/0x1bfc00000, data 0x4d87566/0x4e3e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 118874112 unmapped: 55951360 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 125378560 unmapped: 49446912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 116973568 unmapped: 57851904 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1552304 data_alloc: 184549376 data_used: 6930432
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 123985920 unmapped: 50839552 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 116 heartbeat osd_stat(store_statfs(0x1b4050000/0x0/0x1bfc00000, data 0x758756e/0x763e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114122752 unmapped: 60702720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 117 ms_handle_reset con 0x5606ed34ec00 session 0x5606ecdd8f00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114130944 unmapped: 60694528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114114560 unmapped: 60710912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 118 ms_handle_reset con 0x5606ed3c6c00 session 0x5606eb998780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114204672 unmapped: 60620800 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1457376 data_alloc: 184549376 data_used: 6955008
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 118 heartbeat osd_stat(store_statfs(0x1b6047000/0x0/0x1bfc00000, data 0x4d8bc6c/0x4e45000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.417974472s of 10.016163826s, submitted: 133
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 119 ms_handle_reset con 0x5606ed3c6c00 session 0x5606ecde94a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114245632 unmapped: 60579840 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114262016 unmapped: 60563456 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 119 heartbeat osd_stat(store_statfs(0x1b6043000/0x0/0x1bfc00000, data 0x4d8dfe2/0x4e47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114262016 unmapped: 60563456 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114262016 unmapped: 60563456 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114286592 unmapped: 60538880 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1466257 data_alloc: 184549376 data_used: 6967296
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 120 ms_handle_reset con 0x5606eb3eb400 session 0x5606ee87a780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114343936 unmapped: 60481536 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 121 ms_handle_reset con 0x5606ed203c00 session 0x5606ee87af00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114360320 unmapped: 60465152 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 121 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 122 ms_handle_reset con 0x5606ed208000 session 0x5606ee87be00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 123 ms_handle_reset con 0x5606ed34ec00 session 0x5606ee87ab40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114401280 unmapped: 60424192 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 123 heartbeat osd_stat(store_statfs(0x1b682e000/0x0/0x1bfc00000, data 0x4d971f1/0x4e5f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114425856 unmapped: 60399616 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 124 ms_handle_reset con 0x5606eb3eb400 session 0x5606ee87a1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114466816 unmapped: 60358656 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1513179 data_alloc: 184549376 data_used: 6979584
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b681f000/0x0/0x1bfc00000, data 0x4d9c022/0x4e6c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 125 ms_handle_reset con 0x5606ed203c00 session 0x5606ee87a3c0
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.505522728s of 10.073255539s, submitted: 134
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114532352 unmapped: 60293120 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 125 heartbeat osd_stat(store_statfs(0x1b681d000/0x0/0x1bfc00000, data 0x4d9e38a/0x4e70000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 114532352 unmapped: 60293120 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 126 ms_handle_reset con 0x5606ed208000 session 0x5606ee87b4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115646464 unmapped: 59179008 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 126 ms_handle_reset con 0x5606ede91800 session 0x5606ee87cb40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115802112 unmapped: 59023360 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 127 ms_handle_reset con 0x5606ed3c6c00 session 0x5606ee87cd20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 127 heartbeat osd_stat(store_statfs(0x1b6416000/0x0/0x1bfc00000, data 0x4da26f6/0x4e76000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 115851264 unmapped: 58974208 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1530357 data_alloc: 184549376 data_used: 6991872
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 128 ms_handle_reset con 0x5606eb3eb400 session 0x5606eb99f4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 120135680 unmapped: 54689792 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 128 ms_handle_reset con 0x5606ede91800 session 0x5606ecc903c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 128 heartbeat osd_stat(store_statfs(0x1b5811000/0x0/0x1bfc00000, data 0x59a4c40/0x5a79000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 128 ms_handle_reset con 0x5606ede8e400 session 0x5606ecc90d20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128606208 unmapped: 46219264 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 129802240 unmapped: 45023232 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 130 ms_handle_reset con 0x5606ed208000 session 0x5606eb99f2c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 117325824 unmapped: 57499648 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135553024 unmapped: 39272448 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2651801 data_alloc: 184549376 data_used: 7004160
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 131 ms_handle_reset con 0x5606ed3ca400 session 0x5606ee459680
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.043172836s of 10.053074837s, submitted: 402
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127524864 unmapped: 47300608 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 131 heartbeat osd_stat(store_statfs(0x1ab010000/0x0/0x1bfc00000, data 0x101a9a30/0x1027d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 119406592 unmapped: 55418880 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 132 ms_handle_reset con 0x5606ed3ca400 session 0x5606ee87dc20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 132 heartbeat osd_stat(store_statfs(0x1a840c000/0x0/0x1bfc00000, data 0x12dabde5/0x12e80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 143990784 unmapped: 30834688 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 120217600 unmapped: 54607872 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 129040384 unmapped: 45785088 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 133 heartbeat osd_stat(store_statfs(0x1a140b000/0x0/0x1bfc00000, data 0x19dad591/0x19e82000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4122322 data_alloc: 184549376 data_used: 7016448
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 120971264 unmapped: 53854208 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 129613824 unmapped: 45211648 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 133 ms_handle_reset con 0x5606eb3eb400 session 0x5606eceb9e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 heartbeat osd_stat(store_statfs(0x19b00b000/0x0/0x1bfc00000, data 0x201ad5a1/0x20283000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 125730816 unmapped: 49094656 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121823232 unmapped: 53002240 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 ms_handle_reset con 0x5606ed3c9800 session 0x5606ee87a5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 ms_handle_reset con 0x5606ed208000 session 0x5606eb9a7860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 ms_handle_reset con 0x5606ed203c00 session 0x5606eb99e1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121864192 unmapped: 52961280 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4963049 data_alloc: 184549376 data_used: 7028736
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.431215286s of 10.028745651s, submitted: 190
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 121970688 unmapped: 52854784 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 ms_handle_reset con 0x5606ed208000 session 0x5606ee459a40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 ms_handle_reset con 0x5606eb3eb400 session 0x5606eb9983c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 ms_handle_reset con 0x5606ed3c9800 session 0x5606ea72d2c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122019840 unmapped: 52805632 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 heartbeat osd_stat(store_statfs(0x1b5266000/0x0/0x1bfc00000, data 0x4daf851/0x4e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 heartbeat osd_stat(store_statfs(0x1b5266000/0x0/0x1bfc00000, data 0x4daf851/0x4e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 ms_handle_reset con 0x5606ede8e400 session 0x5606ec66c780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 ms_handle_reset con 0x5606ed3ca400 session 0x5606eab5ef00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122028032 unmapped: 52797440 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122068992 unmapped: 52756480 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 135 heartbeat osd_stat(store_statfs(0x1b5267000/0x0/0x1bfc00000, data 0x4daf7df/0x4e86000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 135 ms_handle_reset con 0x5606ed3ca400 session 0x5606eb9a6000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 135 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecb814a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122060800 unmapped: 52764672 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1676157 data_alloc: 184549376 data_used: 7041024
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122068992 unmapped: 52756480 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122068992 unmapped: 52756480 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 136 ms_handle_reset con 0x5606ed208000 session 0x5606eb99eb40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122118144 unmapped: 52707328 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122118144 unmapped: 52707328 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122118144 unmapped: 52707328 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 137 heartbeat osd_stat(store_statfs(0x1b525b000/0x0/0x1bfc00000, data 0x4db6313/0x4e92000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1686299 data_alloc: 184549376 data_used: 7069696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122134528 unmapped: 52690944 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.547441483s of 10.295126915s, submitted: 253
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 138 ms_handle_reset con 0x5606ed3c9800 session 0x5606eb99e960
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 138 ms_handle_reset con 0x5606ede8e400 session 0x5606ecde9a40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122134528 unmapped: 52690944 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 138 ms_handle_reset con 0x5606ede8e400 session 0x5606eb99f860
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122167296 unmapped: 52658176 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 139 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecde8780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122175488 unmapped: 52649984 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 140 handle_osd_map epochs [139,140], i have 140, src has [1,140]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 140 ms_handle_reset con 0x5606ed208000 session 0x5606ecb7fc20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122191872 unmapped: 52633600 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1695986 data_alloc: 184549376 data_used: 7069696
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 140 heartbeat osd_stat(store_statfs(0x1b524e000/0x0/0x1bfc00000, data 0x4dbcca1/0x4e9e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 140 heartbeat osd_stat(store_statfs(0x1b524f000/0x0/0x1bfc00000, data 0x4dbcc91/0x4e9d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122191872 unmapped: 52633600 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122191872 unmapped: 52633600 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122208256 unmapped: 52617216 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 141 ms_handle_reset con 0x5606ed3c9800 session 0x5606eb999e00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122208256 unmapped: 52617216 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 141 ms_handle_reset con 0x5606ed3ca400 session 0x5606ec604000
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122224640 unmapped: 52600832 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1701254 data_alloc: 184549376 data_used: 7086080
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 141 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec6045a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122232832 unmapped: 52592640 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.280633926s of 10.636819839s, submitted: 115
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 142 heartbeat osd_stat(store_statfs(0x1b524c000/0x0/0x1bfc00000, data 0x4dbf077/0x4ea2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122281984 unmapped: 52543488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 142 heartbeat osd_stat(store_statfs(0x1b524c000/0x0/0x1bfc00000, data 0x4dbf077/0x4ea2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122298368 unmapped: 52527104 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122314752 unmapped: 52510720 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122322944 unmapped: 52502528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1713715 data_alloc: 184549376 data_used: 7098368
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122331136 unmapped: 52494336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 143 ms_handle_reset con 0x5606ed208000 session 0x5606eb9a8000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 144 heartbeat osd_stat(store_statfs(0x1b5245000/0x0/0x1bfc00000, data 0x4dc379d/0x4ea9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122380288 unmapped: 52445184 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122421248 unmapped: 52404224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b523b000/0x0/0x1bfc00000, data 0x4dc7dd3/0x4eb1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122421248 unmapped: 52404224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 145 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecd46780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 145 ms_handle_reset con 0x5606ede8e400 session 0x5606ec66c1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122404864 unmapped: 52420608 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1716589 data_alloc: 184549376 data_used: 7110656
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 145 ms_handle_reset con 0x5606ede91800 session 0x5606eb99e000
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122404864 unmapped: 52420608 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122404864 unmapped: 52420608 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.851850510s of 10.446577072s, submitted: 191
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 122404864 unmapped: 52420608 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 146 ms_handle_reset con 0x5606ede91800 session 0x5606ee87d4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 146 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 146 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecde9860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b5237000/0x0/0x1bfc00000, data 0x4dca032/0x4eb6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 130334720 unmapped: 44490752 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 146 ms_handle_reset con 0x5606ed208000 session 0x5606eca69a40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 146 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecb803c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 124633088 unmapped: 50192384 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1831994 data_alloc: 184549376 data_used: 7127040
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 124633088 unmapped: 50192384 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 146 ms_handle_reset con 0x5606ecb1f400 session 0x5606ee872000
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 124649472 unmapped: 50176000 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 147 heartbeat osd_stat(store_statfs(0x1b44c6000/0x0/0x1bfc00000, data 0x5b3740c/0x5c27000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 147 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecc912c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 125591552 unmapped: 49233920 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 148 ms_handle_reset con 0x5606ecb1f400 session 0x5606ecd89e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 148 ms_handle_reset con 0x5606ed208000 session 0x5606ecc5a780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 148 ms_handle_reset con 0x5606ede8e400 session 0x5606ee87bc20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 125665280 unmapped: 49160192 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 148 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 125739008 unmapped: 49086464 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 149 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecd883c0
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1906371 data_alloc: 184549376 data_used: 7143424
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 125812736 unmapped: 49012736 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 45
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b3ef6000/0x0/0x1bfc00000, data 0x60fe5a2/0x61f7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 150 ms_handle_reset con 0x5606ed3c9800 session 0x5606ecd885a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 150 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec6041e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128000000 unmapped: 46825472 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.624125481s of 10.044705391s, submitted: 110
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 150 ms_handle_reset con 0x5606ed208000 session 0x5606ec66dc20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 150 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 151 ms_handle_reset con 0x5606ede8e400 session 0x5606ecde9a40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 151 ms_handle_reset con 0x5606ed34cc00 session 0x5606eb99e1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 151 ms_handle_reset con 0x5606ede91800 session 0x5606eb4b12c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127401984 unmapped: 47423488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 151 ms_handle_reset con 0x5606ed34cc00 session 0x5606ecd88000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 151 ms_handle_reset con 0x5606ecb1f400 session 0x5606ec6043c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127410176 unmapped: 47415296 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 153 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127459328 unmapped: 47366144 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 153 handle_osd_map epochs [152,153], i have 153, src has [1,153]
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2015153 data_alloc: 184549376 data_used: 7176192
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 153 ms_handle_reset con 0x5606eb3eb400 session 0x5606eb99e960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127492096 unmapped: 47333376 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b3aef000/0x0/0x1bfc00000, data 0x6105001/0x61ff000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127524864 unmapped: 47300608 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 154 ms_handle_reset con 0x5606ed208000 session 0x5606eb9a65a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 154 ms_handle_reset con 0x5606ed208000 session 0x5606eb9a6000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 127541248 unmapped: 47284224 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 156 ms_handle_reset con 0x5606eb3eb400 session 0x5606eab5ef00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128647168 unmapped: 46178304 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 156 ms_handle_reset con 0x5606ecb1f400 session 0x5606ee459a40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128688128 unmapped: 46137344 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1795814 data_alloc: 184549376 data_used: 7196672
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128745472 unmapped: 46080000 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 157 heartbeat osd_stat(store_statfs(0x1b4e09000/0x0/0x1bfc00000, data 0x4de285e/0x4ee4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 158 ms_handle_reset con 0x5606ed34cc00 session 0x5606ee87a5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128761856 unmapped: 46063616 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.862107277s of 10.031414986s, submitted: 333
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 46
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128794624 unmapped: 46030848 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 159 heartbeat osd_stat(store_statfs(0x1b4e01000/0x0/0x1bfc00000, data 0x4de715c/0x4eec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128819200 unmapped: 46006272 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128819200 unmapped: 46006272 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1800817 data_alloc: 184549376 data_used: 7217152
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128819200 unmapped: 46006272 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 159 heartbeat osd_stat(store_statfs(0x1b4e02000/0x0/0x1bfc00000, data 0x4de714c/0x4eeb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128819200 unmapped: 46006272 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128819200 unmapped: 46006272 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 160 heartbeat osd_stat(store_statfs(0x1b4dfe000/0x0/0x1bfc00000, data 0x4de93d2/0x4eef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 160 ms_handle_reset con 0x5606ede91800 session 0x5606eb9a92c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 160 ms_handle_reset con 0x5606eb3eb400 session 0x5606ee87c780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 160 ms_handle_reset con 0x5606ecb1f400 session 0x5606eb9a50e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 160 ms_handle_reset con 0x5606ed208000 session 0x5606ecc914a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128851968 unmapped: 45973504 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 160 ms_handle_reset con 0x5606ed34cc00 session 0x5606ecc91680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128425984 unmapped: 46399488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1917045 data_alloc: 184549376 data_used: 7217152
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128425984 unmapped: 46399488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128425984 unmapped: 46399488 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.545784950s of 10.066487312s, submitted: 164
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 160 heartbeat osd_stat(store_statfs(0x1b40e2000/0x0/0x1bfc00000, data 0x5b024b7/0x5c0c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128229376 unmapped: 46596096 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128229376 unmapped: 46596096 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 161 heartbeat osd_stat(store_statfs(0x1b40dc000/0x0/0x1bfc00000, data 0x5b04881/0x5c11000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 161 ms_handle_reset con 0x5606ed3c9800 session 0x5606eab5e5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128229376 unmapped: 46596096 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1923408 data_alloc: 184549376 data_used: 7229440
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 161 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec66da40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128442368 unmapped: 46383104 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128000000 unmapped: 46825472 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 162 heartbeat osd_stat(store_statfs(0x1b40ae000/0x0/0x1bfc00000, data 0x5b30bec/0x5c3f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128548864 unmapped: 46276608 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128647168 unmapped: 46178304 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128647168 unmapped: 46178304 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1951653 data_alloc: 184549376 data_used: 10035200
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128647168 unmapped: 46178304 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128647168 unmapped: 46178304 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 162 heartbeat osd_stat(store_statfs(0x1b40b0000/0x0/0x1bfc00000, data 0x5b30b7a/0x5c3d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 162 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 162 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.364459038s of 10.524818420s, submitted: 49
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128655360 unmapped: 46170112 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128655360 unmapped: 46170112 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128655360 unmapped: 46170112 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1954479 data_alloc: 184549376 data_used: 10035200
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128655360 unmapped: 46170112 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 163 heartbeat osd_stat(store_statfs(0x1b508c000/0x0/0x1bfc00000, data 0x5b32dc8/0x5c41000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128655360 unmapped: 46170112 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 128655360 unmapped: 46170112 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137347072 unmapped: 37478400 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136413184 unmapped: 38412288 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2030681 data_alloc: 184549376 data_used: 10424320
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136437760 unmapped: 38387712 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 163 heartbeat osd_stat(store_statfs(0x1b483e000/0x0/0x1bfc00000, data 0x6380dc8/0x648f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136437760 unmapped: 38387712 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136437760 unmapped: 38387712 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136437760 unmapped: 38387712 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136437760 unmapped: 38387712 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.090763092s of 12.576493263s, submitted: 130
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2032669 data_alloc: 184549376 data_used: 10428416
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 163 ms_handle_reset con 0x5606ed209000 session 0x5606ec6041e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 163 heartbeat osd_stat(store_statfs(0x1b481c000/0x0/0x1bfc00000, data 0x63a2dd8/0x64b2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136437760 unmapped: 38387712 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 163 ms_handle_reset con 0x5606ed209400 session 0x5606ecc5a780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136437760 unmapped: 38387712 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 164 ms_handle_reset con 0x5606ecaed800 session 0x5606ee87d4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 164 ms_handle_reset con 0x5606ed34f000 session 0x5606eb34c780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136445952 unmapped: 38379520 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136445952 unmapped: 38379520 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 165 ms_handle_reset con 0x5606ede75c00 session 0x5606ee87bc20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136470528 unmapped: 38354944 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b4810000/0x0/0x1bfc00000, data 0x63a791d/0x64bd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2044593 data_alloc: 184549376 data_used: 10457088
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b4811000/0x0/0x1bfc00000, data 0x63a752f/0x64bc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 136470528 unmapped: 38354944 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 166 ms_handle_reset con 0x5606ecaed800 session 0x5606ecd89a40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 166 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 166 ms_handle_reset con 0x5606eb3eb400 session 0x5606eca683c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 144457728 unmapped: 30367744 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 167 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 167 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 167 ms_handle_reset con 0x5606ed209000 session 0x5606eca20d20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 167 ms_handle_reset con 0x5606ede74400 session 0x5606eca212c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135299072 unmapped: 39526400 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 ms_handle_reset con 0x5606e9f44800 session 0x5606ed1670e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b33f3000/0x0/0x1bfc00000, data 0x77b7338/0x78d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 ms_handle_reset con 0x5606ed209400 session 0x5606ecd89860
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135389184 unmapped: 39436288 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 133939200 unmapped: 40886272 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec670960
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 ms_handle_reset con 0x5606ecaed800 session 0x5606ec6710e0
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.849534035s of 10.045892715s, submitted: 214
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2073435 data_alloc: 184549376 data_used: 10457088
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 168 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 169 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 169 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 169 ms_handle_reset con 0x5606ed209000 session 0x5606ec6705a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 169 ms_handle_reset con 0x5606ede75c00 session 0x5606ee69f680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135004160 unmapped: 39821312 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 170 ms_handle_reset con 0x5606e9f44800 session 0x5606ec670d20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 170 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec671c20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134979584 unmapped: 39845888 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 172 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 172 heartbeat osd_stat(store_statfs(0x1b47eb000/0x0/0x1bfc00000, data 0x63bae77/0x64df000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134733824 unmapped: 40091648 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 173 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 173 ms_handle_reset con 0x5606ecaed800 session 0x5606ec6701e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 173 heartbeat osd_stat(store_statfs(0x1b47e9000/0x0/0x1bfc00000, data 0x63bd101/0x64e3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134742016 unmapped: 40083456 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 47
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 173 ms_handle_reset con 0x5606ed209400 session 0x5606ec671860
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134897664 unmapped: 39927808 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2098061 data_alloc: 184549376 data_used: 10473472
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 173 ms_handle_reset con 0x5606e9f44800 session 0x5606ec670b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 174 handle_osd_map epochs [174,174], i have 174, src has [1,174]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 174 handle_osd_map epochs [174,174], i have 174, src has [1,174]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 174 handle_osd_map epochs [174,174], i have 174, src has [1,174]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 174 ms_handle_reset con 0x5606eb3eb400 session 0x5606ec670f00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134938624 unmapped: 39886848 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 175 ms_handle_reset con 0x5606ecaed800 session 0x5606eb34c3c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 175 ms_handle_reset con 0x5606ede49800 session 0x5606ed1672c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 175 ms_handle_reset con 0x5606eac4fc00 session 0x5606ed167c20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134946816 unmapped: 39878656 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 176 handle_osd_map epochs [176,176], i have 176, src has [1,176]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 176 ms_handle_reset con 0x5606ed3cd000 session 0x5606ed40a1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 176 handle_osd_map epochs [176,176], i have 176, src has [1,176]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 176 ms_handle_reset con 0x5606ede75c00 session 0x5606ee69e1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 176 heartbeat osd_stat(store_statfs(0x1b47dc000/0x0/0x1bfc00000, data 0x63c6cd1/0x64f0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134987776 unmapped: 39837696 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 176 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 177 ms_handle_reset con 0x5606eb3eb400 session 0x5606ee87d860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 177 ms_handle_reset con 0x5606e9f44800 session 0x5606ed40ab40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134897664 unmapped: 39927808 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 178 handle_osd_map epochs [177,178], i have 178, src has [1,178]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135946240 unmapped: 38879232 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 178 ms_handle_reset con 0x5606ecaed800 session 0x5606eab5fc20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 178 heartbeat osd_stat(store_statfs(0x1b47cf000/0x0/0x1bfc00000, data 0x63cdcd7/0x64fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2117893 data_alloc: 184549376 data_used: 10498048
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.601421356s of 10.517056465s, submitted: 294
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 179 handle_osd_map epochs [179,179], i have 179, src has [1,179]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 179 ms_handle_reset con 0x5606ede49800 session 0x5606ee69fc20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135946240 unmapped: 38879232 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 179 ms_handle_reset con 0x5606ed34f000 session 0x5606eb34d0e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135946240 unmapped: 38879232 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 180 ms_handle_reset con 0x5606ecb1f400 session 0x5606ecbf5a40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 180 ms_handle_reset con 0x5606ed208000 session 0x5606ecd88780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135970816 unmapped: 38854656 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b47c7000/0x0/0x1bfc00000, data 0x63d238b/0x6506000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 180 ms_handle_reset con 0x5606eb3eb400 session 0x5606ecc91860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 180 ms_handle_reset con 0x5606e9f44800 session 0x5606ed166b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 181 ms_handle_reset con 0x5606ecb1f400 session 0x5606ecd88000
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134299648 unmapped: 40525824 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 181 ms_handle_reset con 0x5606ed208000 session 0x5606ec66dc20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134299648 unmapped: 40525824 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1931643 data_alloc: 184549376 data_used: 7307264
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 182 ms_handle_reset con 0x5606ed34f000 session 0x5606ec6050e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134242304 unmapped: 40583168 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134242304 unmapped: 40583168 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 182 ms_handle_reset con 0x5606ede49800 session 0x5606eb9983c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 183 handle_osd_map epochs [183,183], i have 183, src has [1,183]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 183 ms_handle_reset con 0x5606ede49800 session 0x5606ecbf4b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b5d80000/0x0/0x1bfc00000, data 0x4e1a9a8/0x4f4d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134193152 unmapped: 40632320 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 183 ms_handle_reset con 0x5606e9f44800 session 0x5606eb99fe00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b5d7c000/0x0/0x1bfc00000, data 0x4e1cd9c/0x4f51000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 183 ms_handle_reset con 0x5606ecb1f400 session 0x5606ecd892c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134955008 unmapped: 39870464 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134955008 unmapped: 39870464 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1940823 data_alloc: 184549376 data_used: 7335936
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 185 handle_osd_map epochs [184,185], i have 185, src has [1,185]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134955008 unmapped: 39870464 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.207434654s of 10.960203171s, submitted: 274
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 186 ms_handle_reset con 0x5606ecaec800 session 0x5606ee69ed20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 186 handle_osd_map epochs [185,186], i have 186, src has [1,186]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134971392 unmapped: 39854080 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b5d74000/0x0/0x1bfc00000, data 0x4e237d9/0x4f58000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134979584 unmapped: 39845888 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134979584 unmapped: 39845888 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134979584 unmapped: 39845888 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1946091 data_alloc: 184549376 data_used: 7331840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134979584 unmapped: 39845888 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134979584 unmapped: 39845888 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134995968 unmapped: 39829504 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b5d71000/0x0/0x1bfc00000, data 0x4e25bc8/0x4f5d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134995968 unmapped: 39829504 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b5d6c000/0x0/0x1bfc00000, data 0x4e27e56/0x4f61000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134995968 unmapped: 39829504 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1952029 data_alloc: 184549376 data_used: 7348224
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134995968 unmapped: 39829504 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134995968 unmapped: 39829504 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134995968 unmapped: 39829504 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.595245361s of 11.721000671s, submitted: 71
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 ms_handle_reset con 0x5606ed3ca800 session 0x5606ee87c5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 ms_handle_reset con 0x5606e9f44800 session 0x5606eca214a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135004160 unmapped: 39821312 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b5d6d000/0x0/0x1bfc00000, data 0x4e27f20/0x4f61000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 ms_handle_reset con 0x5606ecaec800 session 0x5606ea6494a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135012352 unmapped: 39813120 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1956265 data_alloc: 184549376 data_used: 7348224
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135020544 unmapped: 39804928 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 ms_handle_reset con 0x5606ecb1f400 session 0x5606eab5f4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135028736 unmapped: 39796736 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 ms_handle_reset con 0x5606ede49800 session 0x5606ecde94a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 ms_handle_reset con 0x5606ed3cd400 session 0x5606ec604000
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134078464 unmapped: 40747008 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 ms_handle_reset con 0x5606e9f44800 session 0x5606ec1c32c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134152192 unmapped: 40673280 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b507a000/0x0/0x1bfc00000, data 0x5b1af5f/0x5c54000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134152192 unmapped: 40673280 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2056741 data_alloc: 184549376 data_used: 7348224
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b507a000/0x0/0x1bfc00000, data 0x5b1b029/0x5c54000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 188 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 134004736 unmapped: 40820736 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 ms_handle_reset con 0x5606ecaec800 session 0x5606ecdd8960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 133054464 unmapped: 41771008 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 ms_handle_reset con 0x5606ecb1f400 session 0x5606ee880000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 heartbeat osd_stat(store_statfs(0x1b5075000/0x0/0x1bfc00000, data 0x5b1d3f3/0x5c59000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 ms_handle_reset con 0x5606ede49800 session 0x5606ec604780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 133079040 unmapped: 41746432 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 ms_handle_reset con 0x5606ed203000 session 0x5606eab5e5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.122662544s of 10.540409088s, submitted: 91
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 ms_handle_reset con 0x5606ed203000 session 0x5606ee459860
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 133087232 unmapped: 41738240 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 heartbeat osd_stat(store_statfs(0x1b5074000/0x0/0x1bfc00000, data 0x5b1d3f3/0x5c59000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 ms_handle_reset con 0x5606e9f44800 session 0x5606eab59a40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 133087232 unmapped: 41738240 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2065754 data_alloc: 184549376 data_used: 7360512
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 133087232 unmapped: 41738240 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 handle_osd_map epochs [190,190], i have 190, src has [1,190]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135192576 unmapped: 39632896 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ecb1f400 session 0x5606ec604d20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ecaec800 session 0x5606ecc903c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135217152 unmapped: 39608320 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 heartbeat osd_stat(store_statfs(0x1b3ace000/0x0/0x1bfc00000, data 0x5b1f882/0x5c60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ede49800 session 0x5606ee87d4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606e9f44800 session 0x5606eca20000
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135249920 unmapped: 39575552 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ecaec800 session 0x5606eb9a5e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ecb1f400 session 0x5606eb99c960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135225344 unmapped: 39600128 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2074865 data_alloc: 184549376 data_used: 7372800
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135225344 unmapped: 39600128 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 heartbeat osd_stat(store_statfs(0x1b3ace000/0x0/0x1bfc00000, data 0x5b1f898/0x5c5f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135233536 unmapped: 39591936 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ed203000 session 0x5606eb9a92c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135241728 unmapped: 39583744 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135241728 unmapped: 39583744 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.366175652s of 10.932482719s, submitted: 80
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ed3ca000 session 0x5606ecc5b680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135258112 unmapped: 39567360 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2078969 data_alloc: 184549376 data_used: 7372800
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135258112 unmapped: 39567360 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ed3ca000 session 0x5606ec671e00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135299072 unmapped: 39526400 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606e9f44800 session 0x5606ec14cd20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x5b1f8ef/0x5c5d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 ms_handle_reset con 0x5606ecaec800 session 0x5606ee87c5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135299072 unmapped: 39526400 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 190 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 191 ms_handle_reset con 0x5606ecb1f400 session 0x5606ee87d4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b3ac9000/0x0/0x1bfc00000, data 0x5b21fa1/0x5c64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135315456 unmapped: 39510016 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b3ac9000/0x0/0x1bfc00000, data 0x5b21fa1/0x5c64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 191 ms_handle_reset con 0x5606ed30c400 session 0x5606ec6041e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135348224 unmapped: 39477248 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2090677 data_alloc: 184549376 data_used: 7389184
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 191 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 192 ms_handle_reset con 0x5606ed203000 session 0x5606ee87cd20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135364608 unmapped: 39460864 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 192 ms_handle_reset con 0x5606e9f44800 session 0x5606ed40a780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 192 ms_handle_reset con 0x5606ecaec800 session 0x5606ec6705a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 192 ms_handle_reset con 0x5606ecb1f400 session 0x5606ec64b2c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135380992 unmapped: 39444480 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 192 heartbeat osd_stat(store_statfs(0x1b3ac5000/0x0/0x1bfc00000, data 0x5b2432b/0x5c68000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 193 ms_handle_reset con 0x5606ed3ca000 session 0x5606ecb7e000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135389184 unmapped: 39436288 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135421952 unmapped: 39403520 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 195 ms_handle_reset con 0x5606ed3ca000 session 0x5606eb9a5e00
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.398358345s of 10.194826126s, submitted: 251
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135462912 unmapped: 39362560 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2102661 data_alloc: 184549376 data_used: 7413760
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 195 ms_handle_reset con 0x5606e9f44800 session 0x5606eb99cd20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135471104 unmapped: 39354368 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 196 ms_handle_reset con 0x5606ecaec800 session 0x5606eb99dc20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135479296 unmapped: 39346176 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 197 heartbeat osd_stat(store_statfs(0x1b3ab4000/0x0/0x1bfc00000, data 0x5b2f56d/0x5c78000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135495680 unmapped: 39329792 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 197 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 198 ms_handle_reset con 0x5606ecb1f400 session 0x5606eb9a7680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135512064 unmapped: 39313408 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 198 ms_handle_reset con 0x5606ed203000 session 0x5606ee87b2c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 199 ms_handle_reset con 0x5606e9f44800 session 0x5606eca214a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135544832 unmapped: 39280640 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2114289 data_alloc: 184549376 data_used: 7426048
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 199 heartbeat osd_stat(store_statfs(0x1b3aab000/0x0/0x1bfc00000, data 0x5b33cbd/0x5c80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135544832 unmapped: 39280640 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 200 ms_handle_reset con 0x5606ecaec800 session 0x5606eca20000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135561216 unmapped: 39264256 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135569408 unmapped: 39256064 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 201 ms_handle_reset con 0x5606ecb1f400 session 0x5606ea9fbc20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 202 ms_handle_reset con 0x5606ed3ca000 session 0x5606eb99f4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135610368 unmapped: 39215104 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 202 ms_handle_reset con 0x5606efa24400 session 0x5606ee87af00
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.481793404s of 10.017744064s, submitted: 188
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135618560 unmapped: 39206912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 202 heartbeat osd_stat(store_statfs(0x1b3a9c000/0x0/0x1bfc00000, data 0x5b3a7e6/0x5c8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2128158 data_alloc: 184549376 data_used: 7426048
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 203 ms_handle_reset con 0x5606efa24400 session 0x5606ecb94000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135667712 unmapped: 39157760 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135741440 unmapped: 39084032 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 205 ms_handle_reset con 0x5606e9f44800 session 0x5606ee87a3c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135782400 unmapped: 39043072 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135815168 unmapped: 39010304 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135815168 unmapped: 39010304 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2144665 data_alloc: 184549376 data_used: 7450624
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b3a8e000/0x0/0x1bfc00000, data 0x5b4387b/0x5c9d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135831552 unmapped: 38993920 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 207 handle_osd_map epochs [207,208], i have 207, src has [1,208]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135847936 unmapped: 38977536 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 208 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 208 ms_handle_reset con 0x5606ecaec800 session 0x5606ecde9e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 208 handle_osd_map epochs [207,208], i have 208, src has [1,208]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135880704 unmapped: 38944768 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 209 handle_osd_map epochs [208,209], i have 209, src has [1,209]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 209 ms_handle_reset con 0x5606ecb1f400 session 0x5606ec6052c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b3a7f000/0x0/0x1bfc00000, data 0x5b4a53a/0x5caa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135905280 unmapped: 38920192 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135905280 unmapped: 38920192 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2154478 data_alloc: 184549376 data_used: 7467008
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.848621368s of 10.753606796s, submitted: 296
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135913472 unmapped: 38912000 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 135913472 unmapped: 38912000 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b3a82000/0x0/0x1bfc00000, data 0x5b4a6d1/0x5cab000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 209 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137560064 unmapped: 37265408 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 48
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137576448 unmapped: 37249024 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137576448 unmapped: 37249024 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2165178 data_alloc: 184549376 data_used: 7483392
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b3a79000/0x0/0x1bfc00000, data 0x5b4ef86/0x5cb5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 211 ms_handle_reset con 0x5606ede92c00 session 0x5606ed1661e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b3a76000/0x0/0x1bfc00000, data 0x5b4ef99/0x5cb6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137584640 unmapped: 37240832 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b3a76000/0x0/0x1bfc00000, data 0x5b4ef99/0x5cb6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 211 ms_handle_reset con 0x5606e9f44800 session 0x5606ee69e780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 211 ms_handle_reset con 0x5606ecaec800 session 0x5606ee69f0e0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137592832 unmapped: 37232640 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137592832 unmapped: 37232640 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137601024 unmapped: 37224448 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137601024 unmapped: 37224448 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2172859 data_alloc: 184549376 data_used: 7495680
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.553096771s of 10.080410004s, submitted: 413
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 212 heartbeat osd_stat(store_statfs(0x1b3a74000/0x0/0x1bfc00000, data 0x5b51750/0x5cba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137601024 unmapped: 37224448 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 49
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137609216 unmapped: 37216256 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137658368 unmapped: 37167104 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137666560 unmapped: 37158912 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137682944 unmapped: 37142528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2176305 data_alloc: 184549376 data_used: 7512064
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 213 heartbeat osd_stat(store_statfs(0x1b3a74000/0x0/0x1bfc00000, data 0x5b53a45/0x5cba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137682944 unmapped: 37142528 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 213 heartbeat osd_stat(store_statfs(0x1b3a73000/0x0/0x1bfc00000, data 0x5b53b0c/0x5cbb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137691136 unmapped: 37134336 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137715712 unmapped: 37109760 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 heartbeat osd_stat(store_statfs(0x1b3a70000/0x0/0x1bfc00000, data 0x5b55d07/0x5cbd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137715712 unmapped: 37109760 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137715712 unmapped: 37109760 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2180956 data_alloc: 184549376 data_used: 7528448
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137715712 unmapped: 37109760 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.942636490s of 11.359837532s, submitted: 138
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137723904 unmapped: 37101568 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137723904 unmapped: 37101568 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 heartbeat osd_stat(store_statfs(0x1b3a6f000/0x0/0x1bfc00000, data 0x5b55d40/0x5cbd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137732096 unmapped: 37093376 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137740288 unmapped: 37085184 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2182105 data_alloc: 184549376 data_used: 7528448
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 ms_handle_reset con 0x5606ecb1f400 session 0x5606ec14c000
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 ms_handle_reset con 0x5606efa24400 session 0x5606ecb7fa40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137797632 unmapped: 37027840 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137797632 unmapped: 37027840 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 heartbeat osd_stat(store_statfs(0x1b3a6f000/0x0/0x1bfc00000, data 0x5b55db2/0x5cbf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 ms_handle_reset con 0x5606ecaecc00 session 0x5606ecc90d20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 heartbeat osd_stat(store_statfs(0x1b3a6f000/0x0/0x1bfc00000, data 0x5b55db2/0x5cbf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 ms_handle_reset con 0x5606ecaecc00 session 0x5606ec605c20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 ms_handle_reset con 0x5606ecaec800 session 0x5606ed4012c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137854976 unmapped: 36970496 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 215 handle_osd_map epochs [215,215], i have 215, src has [1,215]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 215 ms_handle_reset con 0x5606ecb1f400 session 0x5606ec1c3680
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 215 ms_handle_reset con 0x5606efa24400 session 0x5606ecc5a5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 137895936 unmapped: 36929536 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 216 ms_handle_reset con 0x5606ede90000 session 0x5606eca20b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 216 ms_handle_reset con 0x5606e9f44800 session 0x5606eac5f860
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 138960896 unmapped: 35864576 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2205629 data_alloc: 184549376 data_used: 7540736
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 216 ms_handle_reset con 0x5606ede90000 session 0x5606ecb7e960
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 216 ms_handle_reset con 0x5606ecaec800 session 0x5606ea72c3c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139010048 unmapped: 35815424 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b3a60000/0x0/0x1bfc00000, data 0x5b5aa91/0x5ccb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.726260185s of 10.212614059s, submitted: 113
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139034624 unmapped: 35790848 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139051008 unmapped: 35774464 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 218 ms_handle_reset con 0x5606ecaecc00 session 0x5606ed40ba40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 218 heartbeat osd_stat(store_statfs(0x1b3a5d000/0x0/0x1bfc00000, data 0x5b5cf3f/0x5cd0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 218 ms_handle_reset con 0x5606efa24400 session 0x5606ecdea960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139100160 unmapped: 35725312 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 219 ms_handle_reset con 0x5606e9f44800 session 0x5606eb99e780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 219 ms_handle_reset con 0x5606ecb1f400 session 0x5606eca68960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139116544 unmapped: 35708928 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b3a58000/0x0/0x1bfc00000, data 0x5b5f43f/0x5cd4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2220770 data_alloc: 184549376 data_used: 7553024
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 219 ms_handle_reset con 0x5606ecaec800 session 0x5606ee87d680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139116544 unmapped: 35708928 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 219 ms_handle_reset con 0x5606ecaecc00 session 0x5606ee4594a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 220 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 220 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 220 ms_handle_reset con 0x5606ede90000 session 0x5606ecdeb4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139149312 unmapped: 35676160 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 220 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 221 ms_handle_reset con 0x5606ede90000 session 0x5606ec66de00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139198464 unmapped: 35627008 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 221 heartbeat osd_stat(store_statfs(0x1b3a54000/0x0/0x1bfc00000, data 0x5b65e04/0x5cd9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [0,0,1,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139214848 unmapped: 35610624 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 222 ms_handle_reset con 0x5606e9f44800 session 0x5606eb9a65a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 222 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139321344 unmapped: 35504128 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 222 ms_handle_reset con 0x5606ecaec800 session 0x5606ee458780
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2229485 data_alloc: 184549376 data_used: 7573504
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b364e000/0x0/0x1bfc00000, data 0x5b68364/0x5cde000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139345920 unmapped: 35479552 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.187991142s of 10.003891945s, submitted: 275
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139345920 unmapped: 35479552 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 224 ms_handle_reset con 0x5606ecaecc00 session 0x5606ee69fa40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 224 ms_handle_reset con 0x5606ecb1f400 session 0x5606ee87b680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139386880 unmapped: 35438592 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 224 ms_handle_reset con 0x5606e9f44800 session 0x5606ecde9c20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 225 heartbeat osd_stat(store_statfs(0x1b3641000/0x0/0x1bfc00000, data 0x5b6cc0c/0x5ceb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 225 ms_handle_reset con 0x5606ecaec800 session 0x5606ed40be00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139411456 unmapped: 35414016 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 226 ms_handle_reset con 0x5606ecaecc00 session 0x5606ec670960
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 226 handle_osd_map epochs [226,226], i have 226, src has [1,226]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139419648 unmapped: 35405824 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2253010 data_alloc: 184549376 data_used: 7585792
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 226 heartbeat osd_stat(store_statfs(0x1b3638000/0x0/0x1bfc00000, data 0x5b71325/0x5cf4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139427840 unmapped: 35397632 heap: 174825472 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 226 ms_handle_reset con 0x5606efa24400 session 0x5606ecd89a40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 227 ms_handle_reset con 0x5606ede90000 session 0x5606ecdd9680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147988480 unmapped: 35233792 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149069824 unmapped: 34152448 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 229 ms_handle_reset con 0x5606e9f44800 session 0x5606ec1c30e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 148070400 unmapped: 35151872 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 230 handle_osd_map epochs [230,230], i have 230, src has [1,230]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 230 handle_osd_map epochs [230,230], i have 230, src has [1,230]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139657216 unmapped: 43565056 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2595026 data_alloc: 184549376 data_used: 7610368
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139526144 unmapped: 43696128 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 230 heartbeat osd_stat(store_statfs(0x1af62b000/0x0/0x1bfc00000, data 0x9b7a12b/0x9d02000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.979291916s of 10.025147438s, submitted: 265
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139526144 unmapped: 43696128 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 231 ms_handle_reset con 0x5606ecaec800 session 0x5606ecdd9860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139550720 unmapped: 43671552 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147980288 unmapped: 35241984 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139583488 unmapped: 43638784 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3199957 data_alloc: 184549376 data_used: 7614464
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 231 heartbeat osd_stat(store_statfs(0x1aae2a000/0x0/0x1bfc00000, data 0xe37c366/0xe504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 139599872 unmapped: 43622400 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 231 ms_handle_reset con 0x5606ecaecc00 session 0x5606ee87c780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 232 heartbeat osd_stat(store_statfs(0x1a8e24000/0x0/0x1bfc00000, data 0x1037e61e/0x10509000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 148029440 unmapped: 35192832 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 140689408 unmapped: 42532864 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 232 ms_handle_reset con 0x5606efa24400 session 0x5606ecb7e1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 232 heartbeat osd_stat(store_statfs(0x1a7e22000/0x0/0x1bfc00000, data 0x1137e754/0x1150b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 232 ms_handle_reset con 0x5606eada4c00 session 0x5606ec66c780
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149127168 unmapped: 34095104 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 233 ms_handle_reset con 0x5606ecaec800 session 0x5606ea9fa1e0
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149151744 unmapped: 34070528 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3658869 data_alloc: 184549376 data_used: 7643136
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 233 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 234 ms_handle_reset con 0x5606e9f44800 session 0x5606eb9a92c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 234 ms_handle_reset con 0x5606ecaecc00 session 0x5606ecb7ed20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 234 heartbeat osd_stat(store_statfs(0x1a6e1d000/0x0/0x1bfc00000, data 0x12380ad3/0x12510000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 234 ms_handle_reset con 0x5606ed3cc800 session 0x5606ec14c5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 141844480 unmapped: 41377792 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.285438538s of 10.107241631s, submitted: 168
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 234 ms_handle_reset con 0x5606ee42ec00 session 0x5606eb9a9c20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 141852672 unmapped: 41369600 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 235 heartbeat osd_stat(store_statfs(0x1a5617000/0x0/0x1bfc00000, data 0x13b83775/0x13d17000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 235 ms_handle_reset con 0x5606efa24400 session 0x5606ed166f00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 150282240 unmapped: 32940032 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 236 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 236 ms_handle_reset con 0x5606e9f44800 session 0x5606ed40a960
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 141926400 unmapped: 41295872 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 236 ms_handle_reset con 0x5606ecaecc00 session 0x5606ee458b40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 237 handle_osd_map epochs [237,237], i have 237, src has [1,237]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 237 ms_handle_reset con 0x5606ecaec800 session 0x5606eb34da40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 237 heartbeat osd_stat(store_statfs(0x1a460f000/0x0/0x1bfc00000, data 0x14b879fb/0x14d1c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 237 ms_handle_reset con 0x5606ed3cc800 session 0x5606ee458d20
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 237 ms_handle_reset con 0x5606ed3cb400 session 0x5606ea9fa000
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 150388736 unmapped: 32833536 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4054827 data_alloc: 184549376 data_used: 7651328
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 142032896 unmapped: 41189376 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 142041088 unmapped: 41181184 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 150470656 unmapped: 32751616 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 150478848 unmapped: 32743424 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 238 heartbeat osd_stat(store_statfs(0x1a060a000/0x0/0x1bfc00000, data 0x18b8c0c8/0x18d23000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 142123008 unmapped: 41099264 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4556373 data_alloc: 184549376 data_used: 7651328
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155205632 unmapped: 28016640 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.992366791s of 10.119318962s, submitted: 241
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 142188544 unmapped: 41033728 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 142270464 unmapped: 40951808 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 239 heartbeat osd_stat(store_statfs(0x19a608000/0x0/0x1bfc00000, data 0x1eb8e252/0x1ed25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 150773760 unmapped: 32448512 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 142540800 unmapped: 40681472 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 5270752 data_alloc: 184549376 data_used: 7667712
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 240 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 240 ms_handle_reset con 0x5606e9f44800 session 0x5606ec671860
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 142655488 unmapped: 40566784 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 151142400 unmapped: 32079872 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 241 heartbeat osd_stat(store_statfs(0x196dfd000/0x0/0x1bfc00000, data 0x22392c22/0x22530000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 151281664 unmapped: 31940608 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 151404544 unmapped: 31817728 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 241 ms_handle_reset con 0x5606ecaec800 session 0x5606ecb7fe00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 241 heartbeat osd_stat(store_statfs(0x1955fa000/0x0/0x1bfc00000, data 0x23b92d22/0x23d31000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 152518656 unmapped: 30703616 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 5723676 data_alloc: 184549376 data_used: 7696384
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 144310272 unmapped: 38912000 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 241 ms_handle_reset con 0x5606ecaecc00 session 0x5606eac5e3c0
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.906042099s of 10.010189056s, submitted: 186
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 144441344 unmapped: 38780928 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 242 heartbeat osd_stat(store_statfs(0x190df8000/0x0/0x1bfc00000, data 0x28394fd3/0x28535000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 152961024 unmapped: 30261248 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 242 ms_handle_reset con 0x5606efa24400 session 0x5606ecc5af00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 242 ms_handle_reset con 0x5606e9f44800 session 0x5606ecc91e00
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 144728064 unmapped: 38494208 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 153182208 unmapped: 30040064 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 6355496 data_alloc: 184549376 data_used: 7708672
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 144859136 unmapped: 38363136 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 145031168 unmapped: 38191104 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 145031168 unmapped: 38191104 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 242 heartbeat osd_stat(store_statfs(0x18cdf9000/0x0/0x1bfc00000, data 0x2c39513f/0x2c535000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 146284544 unmapped: 36937728 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 243 handle_osd_map epochs [243,243], i have 243, src has [1,243]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 243 ms_handle_reset con 0x5606ecaec800 session 0x5606ecb7e5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 243 ms_handle_reset con 0x5606e9f45800 session 0x5606ecde9e00
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 146382848 unmapped: 36839424 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 6690140 data_alloc: 184549376 data_used: 7720960
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 243 handle_osd_map epochs [243,243], i have 243, src has [1,243]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 146382848 unmapped: 36839424 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 heartbeat osd_stat(store_statfs(0x18bbbb000/0x0/0x1bfc00000, data 0x2d5cf51d/0x2d772000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,2,2])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.136258125s of 10.004679680s, submitted: 197
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 ms_handle_reset con 0x5606ecaecc00 session 0x5606ec1c32c0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147816448 unmapped: 35405824 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 ms_handle_reset con 0x5606ed3cb400 session 0x5606ecdeba40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147865600 unmapped: 35356672 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 50
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 ms_handle_reset con 0x5606ed3cb400 session 0x5606eb99e5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147881984 unmapped: 35340288 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147881984 unmapped: 35340288 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2439725 data_alloc: 184549376 data_used: 7720960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147881984 unmapped: 35340288 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 heartbeat osd_stat(store_statfs(0x1b35f6000/0x0/0x1bfc00000, data 0x5b998bd/0x5d38000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 244 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147890176 unmapped: 35332096 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 245 ms_handle_reset con 0x5606e9f44800 session 0x5606ecdd85a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147890176 unmapped: 35332096 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 245 ms_handle_reset con 0x5606e9f45800 session 0x5606eac5e780
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147890176 unmapped: 35332096 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147890176 unmapped: 35332096 heap: 183222272 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2451213 data_alloc: 184549376 data_used: 7733248
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147906560 unmapped: 43712512 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b2dee000/0x0/0x1bfc00000, data 0x639bc03/0x6540000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.751035690s of 10.049364090s, submitted: 309
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 246 ms_handle_reset con 0x5606ecaec800 session 0x5606eb999c20
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147914752 unmapped: 43704320 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 246 heartbeat osd_stat(store_statfs(0x1b25ee000/0x0/0x1bfc00000, data 0x6b9bc68/0x6d40000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147922944 unmapped: 43696128 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 ms_handle_reset con 0x5606ecaecc00 session 0x5606eb9a1e00
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 ms_handle_reset con 0x5606ecaecc00 session 0x5606ee87d4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147939328 unmapped: 43679744 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 ms_handle_reset con 0x5606e9f44800 session 0x5606ee87c5a0
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147939328 unmapped: 43679744 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2459531 data_alloc: 184549376 data_used: 7745536
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 heartbeat osd_stat(store_statfs(0x1b35e7000/0x0/0x1bfc00000, data 0x5ba035b/0x5d45000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 ms_handle_reset con 0x5606e9f45800 session 0x5606ed408960
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147939328 unmapped: 43679744 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147939328 unmapped: 43679744 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 heartbeat osd_stat(store_statfs(0x1b35e9000/0x0/0x1bfc00000, data 0x5ba04c3/0x5d45000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147955712 unmapped: 43663360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147955712 unmapped: 43663360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147955712 unmapped: 43663360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2461097 data_alloc: 184549376 data_used: 7745536
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 heartbeat osd_stat(store_statfs(0x1b35e7000/0x0/0x1bfc00000, data 0x5ba0655/0x5d46000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147955712 unmapped: 43663360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 heartbeat osd_stat(store_statfs(0x1b35e7000/0x0/0x1bfc00000, data 0x5ba0655/0x5d46000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.754714966s of 10.116381645s, submitted: 105
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 247 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147955712 unmapped: 43663360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 249 ms_handle_reset con 0x5606ecaec800 session 0x5606ecd89680
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147955712 unmapped: 43663360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 249 ms_handle_reset con 0x5606ed3cb400 session 0x5606ec6a03c0
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147955712 unmapped: 43663360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147955712 unmapped: 43663360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2469679 data_alloc: 184549376 data_used: 7770112
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 249 ms_handle_reset con 0x5606e9f44800 session 0x5606ecd89860
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147963904 unmapped: 43655168 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147963904 unmapped: 43655168 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 250 heartbeat osd_stat(store_statfs(0x1b35dc000/0x0/0x1bfc00000, data 0x5ba702e/0x5d51000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147972096 unmapped: 43646976 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147972096 unmapped: 43646976 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147980288 unmapped: 43638784 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2473787 data_alloc: 184549376 data_used: 7770112
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147980288 unmapped: 43638784 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.13] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 251 heartbeat osd_stat(store_statfs(0x1b35db000/0x0/0x1bfc00000, data 0x5ba71f8/0x5d52000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.855350494s of 10.008944511s, submitted: 70
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147980288 unmapped: 43638784 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147980288 unmapped: 43638784 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147980288 unmapped: 43638784 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 251 heartbeat osd_stat(store_statfs(0x1b35d5000/0x0/0x1bfc00000, data 0x5ba957c/0x5d58000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147988480 unmapped: 43630592 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2478633 data_alloc: 184549376 data_used: 7782400
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147996672 unmapped: 43622400 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 147996672 unmapped: 43622400 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b31d4000/0x0/0x1bfc00000, data 0x5baba47/0x5d59000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2482581 data_alloc: 184549376 data_used: 7794688
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b31d4000/0x0/0x1bfc00000, data 0x5babaac/0x5d59000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.772136688s of 10.090875626s, submitted: 110
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b31d4000/0x0/0x1bfc00000, data 0x5babaac/0x5d59000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b31d5000/0x0/0x1bfc00000, data 0x5babb40/0x5d59000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2481043 data_alloc: 184549376 data_used: 7794688
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149045248 unmapped: 42573824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b31d5000/0x0/0x1bfc00000, data 0x5babc6f/0x5d59000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2492263 data_alloc: 184549376 data_used: 7806976
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b31c9000/0x0/0x1bfc00000, data 0x5bb03c4/0x5d63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b31c9000/0x0/0x1bfc00000, data 0x5bb03c4/0x5d63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.899334908s of 10.008271217s, submitted: 57
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b31c9000/0x0/0x1bfc00000, data 0x5bb0429/0x5d63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2489861 data_alloc: 184549376 data_used: 7806976
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b31cc000/0x0/0x1bfc00000, data 0x5bb0458/0x5d62000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 255 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2494389 data_alloc: 184549376 data_used: 7819264
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b31c7000/0x0/0x1bfc00000, data 0x5bb2931/0x5d66000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.901062965s of 10.009350777s, submitted: 30
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149053440 unmapped: 42565632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149061632 unmapped: 42557440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 256 heartbeat osd_stat(store_statfs(0x1b31c3000/0x0/0x1bfc00000, data 0x5bb4e2f/0x5d6a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149061632 unmapped: 42557440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2500247 data_alloc: 184549376 data_used: 7831552
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149061632 unmapped: 42557440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149102592 unmapped: 42516480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 256 heartbeat osd_stat(store_statfs(0x1b31c2000/0x0/0x1bfc00000, data 0x5bb4eca/0x5d6b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149102592 unmapped: 42516480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149102592 unmapped: 42516480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149102592 unmapped: 42516480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2500077 data_alloc: 184549376 data_used: 7831552
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149102592 unmapped: 42516480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149102592 unmapped: 42516480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 256 heartbeat osd_stat(store_statfs(0x1b31c0000/0x0/0x1bfc00000, data 0x5bb510c/0x5d6d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 256 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 256 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 256 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.849387169s of 10.004693985s, submitted: 59
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149110784 unmapped: 42508288 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149110784 unmapped: 42508288 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31bb000/0x0/0x1bfc00000, data 0x5bb73bf/0x5d71000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149110784 unmapped: 42508288 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2506695 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149110784 unmapped: 42508288 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149127168 unmapped: 42491904 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149127168 unmapped: 42491904 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149127168 unmapped: 42491904 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149127168 unmapped: 42491904 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31c0000/0x0/0x1bfc00000, data 0x5bb7444/0x5d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2503451 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149127168 unmapped: 42491904 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149127168 unmapped: 42491904 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149127168 unmapped: 42491904 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149135360 unmapped: 42483712 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.513706207s of 12.609534264s, submitted: 22
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2505011 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31bf000/0x0/0x1bfc00000, data 0x5bb750b/0x5d6f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31bf000/0x0/0x1bfc00000, data 0x5bb7570/0x5d6f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31c0000/0x0/0x1bfc00000, data 0x5bb75d8/0x5d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2504353 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31c0000/0x0/0x1bfc00000, data 0x5bb75d8/0x5d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149143552 unmapped: 42475520 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31c0000/0x0/0x1bfc00000, data 0x5bb75d8/0x5d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149159936 unmapped: 42459136 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149159936 unmapped: 42459136 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2506217 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149159936 unmapped: 42459136 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.445040703s of 11.490411758s, submitted: 10
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149168128 unmapped: 42450944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149168128 unmapped: 42450944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31bd000/0x0/0x1bfc00000, data 0x5bb796f/0x5d71000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149184512 unmapped: 42434560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149192704 unmapped: 42426368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2510783 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31bb000/0x0/0x1bfc00000, data 0x5bb79d7/0x5d71000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149192704 unmapped: 42426368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149192704 unmapped: 42426368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149192704 unmapped: 42426368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b31ba000/0x0/0x1bfc00000, data 0x5bb7af3/0x5d72000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149192704 unmapped: 42426368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 149192704 unmapped: 42426368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2518329 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 150413312 unmapped: 41205760 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b3199000/0x0/0x1bfc00000, data 0x5bd931c/0x5d93000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 151052288 unmapped: 40566784 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.366498947s of 10.633449554s, submitted: 59
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 151404544 unmapped: 40214528 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b3127000/0x0/0x1bfc00000, data 0x5c4c7fc/0x5e07000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 151519232 unmapped: 40099840 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 151764992 unmapped: 39854080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2527941 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 153174016 unmapped: 38445056 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b30bd000/0x0/0x1bfc00000, data 0x5cb4f1e/0x5e71000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 153264128 unmapped: 38354944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 153264128 unmapped: 38354944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 152690688 unmapped: 38928384 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b3022000/0x0/0x1bfc00000, data 0x5d50889/0x5f0c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 152788992 unmapped: 38830080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2550429 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 153845760 unmapped: 37773312 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 154066944 unmapped: 37552128 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.441176414s of 10.002470016s, submitted: 125
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 154083328 unmapped: 37535744 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b2f73000/0x0/0x1bfc00000, data 0x5e010a5/0x5fbb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 154238976 unmapped: 37380096 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155557888 unmapped: 36061184 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2555479 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155557888 unmapped: 36061184 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b2f05000/0x0/0x1bfc00000, data 0x5e6e768/0x6028000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155557888 unmapped: 36061184 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b2f05000/0x0/0x1bfc00000, data 0x5e6e768/0x6028000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 154951680 unmapped: 36667392 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 154959872 unmapped: 36659200 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 154976256 unmapped: 36642816 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2566483 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155254784 unmapped: 36364288 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b2e27000/0x0/0x1bfc00000, data 0x5f4c868/0x6106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155410432 unmapped: 36208640 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.463745117s of 10.003401756s, submitted: 106
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156672000 unmapped: 34947072 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155828224 unmapped: 35790848 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155926528 unmapped: 35692544 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2569565 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 155926528 unmapped: 35692544 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156393472 unmapped: 35225600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b2d9c000/0x0/0x1bfc00000, data 0x5fd94be/0x6192000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156557312 unmapped: 35061760 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b2d8f000/0x0/0x1bfc00000, data 0x5fe6d64/0x619f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156557312 unmapped: 35061760 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156876800 unmapped: 34742272 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2579413 data_alloc: 184549376 data_used: 7843840
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 157941760 unmapped: 33677312 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 157958144 unmapped: 33660928 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.724323273s of 10.007202148s, submitted: 62
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b2d28000/0x0/0x1bfc00000, data 0x604d491/0x6206000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 257 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156835840 unmapped: 34783232 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156958720 unmapped: 34660352 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156958720 unmapped: 34660352 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 258 heartbeat osd_stat(store_statfs(0x1b2cd2000/0x0/0x1bfc00000, data 0x60a172a/0x625b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2588523 data_alloc: 184549376 data_used: 7856128
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 157138944 unmapped: 34480128 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 156942336 unmapped: 34676736 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 259 heartbeat osd_stat(store_statfs(0x1b2c8f000/0x0/0x1bfc00000, data 0x60e404e/0x629e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 157138944 unmapped: 34480128 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 157376512 unmapped: 34242560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 158588928 unmapped: 33030144 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2600045 data_alloc: 184549376 data_used: 7868416
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 158588928 unmapped: 33030144 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 259 heartbeat osd_stat(store_statfs(0x1b2c1f000/0x0/0x1bfc00000, data 0x6153f12/0x630f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 158973952 unmapped: 32645120 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.611241341s of 10.004037857s, submitted: 110
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 260 heartbeat osd_stat(store_statfs(0x1b2c0d000/0x0/0x1bfc00000, data 0x61668af/0x6321000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 158998528 unmapped: 32620544 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 159178752 unmapped: 32440320 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 260 heartbeat osd_stat(store_statfs(0x1b2bd2000/0x0/0x1bfc00000, data 0x619e461/0x635b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 159539200 unmapped: 32079872 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2608209 data_alloc: 184549376 data_used: 7880704
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 159686656 unmapped: 31932416 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 159809536 unmapped: 31809536 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 260 heartbeat osd_stat(store_statfs(0x1b2b95000/0x0/0x1bfc00000, data 0x61dcd7a/0x6399000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 260 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 261 heartbeat osd_stat(store_statfs(0x1b2b52000/0x0/0x1bfc00000, data 0x621dc28/0x63db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160874496 unmapped: 30744576 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 261 heartbeat osd_stat(store_statfs(0x1b2b45000/0x0/0x1bfc00000, data 0x622a3ec/0x63e8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160882688 unmapped: 30736384 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160882688 unmapped: 30736384 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2614557 data_alloc: 184549376 data_used: 7892992
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161136640 unmapped: 30482432 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161136640 unmapped: 30482432 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.552748680s of 10.006714821s, submitted: 132
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161374208 unmapped: 30244864 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161087488 unmapped: 30531584 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 262 heartbeat osd_stat(store_statfs(0x1b2abb000/0x0/0x1bfc00000, data 0x62b0517/0x6472000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160153600 unmapped: 31465472 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2628417 data_alloc: 184549376 data_used: 7905280
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160153600 unmapped: 31465472 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160350208 unmapped: 31268864 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 262 heartbeat osd_stat(store_statfs(0x1b2a9e000/0x0/0x1bfc00000, data 0x62cf54c/0x6490000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160366592 unmapped: 31252480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160489472 unmapped: 31129600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 263 heartbeat osd_stat(store_statfs(0x1b2a4d000/0x0/0x1bfc00000, data 0x631d943/0x64e0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160497664 unmapped: 31121408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2633687 data_alloc: 184549376 data_used: 7917568
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160505856 unmapped: 31113216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 160505856 unmapped: 31113216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.971612930s of 10.280959129s, submitted: 99
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 264 handle_osd_map epochs [264,264], i have 264, src has [1,264]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161513472 unmapped: 30105600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161513472 unmapped: 30105600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161513472 unmapped: 30105600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2646789 data_alloc: 184549376 data_used: 7929856
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 264 heartbeat osd_stat(store_statfs(0x1b1825000/0x0/0x1bfc00000, data 0x63a58a1/0x6569000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162856960 unmapped: 28762112 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161939456 unmapped: 29679616 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 161955840 unmapped: 29663232 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162357248 unmapped: 29261824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162357248 unmapped: 29261824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 265 heartbeat osd_stat(store_statfs(0x1b174c000/0x0/0x1bfc00000, data 0x647afb5/0x6641000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2666521 data_alloc: 184549376 data_used: 7942144
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162488320 unmapped: 29130752 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162791424 unmapped: 28827648 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.376541138s of 10.001126289s, submitted: 149
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 266 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162832384 unmapped: 28786688 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162832384 unmapped: 28786688 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 51
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162783232 unmapped: 28835840 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2668869 data_alloc: 184549376 data_used: 7954432
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 266 heartbeat osd_stat(store_statfs(0x1b16c8000/0x0/0x1bfc00000, data 0x64fe4f0/0x66c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162799616 unmapped: 28819456 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163110912 unmapped: 28508160 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 162742272 unmapped: 28876800 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1661000/0x0/0x1bfc00000, data 0x6562fcc/0x672d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1661000/0x0/0x1bfc00000, data 0x6562fcc/0x672d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163004416 unmapped: 28614656 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163119104 unmapped: 28499968 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b162c000/0x0/0x1bfc00000, data 0x659780f/0x6762000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2684147 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163135488 unmapped: 28483584 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163250176 unmapped: 28368896 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163250176 unmapped: 28368896 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1616000/0x0/0x1bfc00000, data 0x65aded5/0x6778000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1616000/0x0/0x1bfc00000, data 0x65aded5/0x6778000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163323904 unmapped: 28295168 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1616000/0x0/0x1bfc00000, data 0x65aded5/0x6778000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163323904 unmapped: 28295168 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1616000/0x0/0x1bfc00000, data 0x65aded5/0x6778000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.246311188s of 13.480948448s, submitted: 60
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2688783 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163291136 unmapped: 28327936 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163241984 unmapped: 28377088 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163241984 unmapped: 28377088 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b15a6000/0x0/0x1bfc00000, data 0x661dc7a/0x67e8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163241984 unmapped: 28377088 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163577856 unmapped: 28041216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2698131 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1546000/0x0/0x1bfc00000, data 0x667c1a8/0x6848000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163577856 unmapped: 28041216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163586048 unmapped: 28033024 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163921920 unmapped: 27697152 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1518000/0x0/0x1bfc00000, data 0x66aaaf5/0x6876000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163921920 unmapped: 27697152 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163921920 unmapped: 27697152 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2694747 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.098254204s of 10.315689087s, submitted: 42
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 163930112 unmapped: 27688960 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 165126144 unmapped: 26492928 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 165126144 unmapped: 26492928 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b14a5000/0x0/0x1bfc00000, data 0x671bde5/0x68e8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 165281792 unmapped: 26337280 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 165208064 unmapped: 26411008 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b147d000/0x0/0x1bfc00000, data 0x6743341/0x6911000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2708731 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 165347328 unmapped: 26271744 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 164167680 unmapped: 27451392 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 164167680 unmapped: 27451392 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b143e000/0x0/0x1bfc00000, data 0x6782711/0x6950000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 164257792 unmapped: 27361280 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 164257792 unmapped: 27361280 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2711211 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 164257792 unmapped: 27361280 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b1424000/0x0/0x1bfc00000, data 0x679cf68/0x696a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.708532333s of 10.902816772s, submitted: 40
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 164347904 unmapped: 27271168 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 164462592 unmapped: 27156480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 164462592 unmapped: 27156480 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166584320 unmapped: 25034752 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b13cb000/0x0/0x1bfc00000, data 0x67f6014/0x69c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7e6f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2725031 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166895616 unmapped: 24723456 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167026688 unmapped: 24592384 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167026688 unmapped: 24592384 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167157760 unmapped: 24461312 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b016b000/0x0/0x1bfc00000, data 0x68b56c5/0x6a83000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167157760 unmapped: 24461312 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2726239 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167157760 unmapped: 24461312 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.734521866s of 10.002904892s, submitted: 54
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167362560 unmapped: 24256512 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167362560 unmapped: 24256512 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167362560 unmapped: 24256512 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167477248 unmapped: 24141824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b00d6000/0x0/0x1bfc00000, data 0x694e2f8/0x6b18000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2733187 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167477248 unmapped: 24141824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167477248 unmapped: 24141824 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166133760 unmapped: 25485312 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166273024 unmapped: 25346048 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166273024 unmapped: 25346048 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2733457 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166273024 unmapped: 25346048 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b007f000/0x0/0x1bfc00000, data 0x69a5625/0x6b6f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.835225105s of 10.002253532s, submitted: 34
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166469632 unmapped: 25149440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166592512 unmapped: 25026560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166699008 unmapped: 24920064 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166699008 unmapped: 24920064 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2736871 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 166699008 unmapped: 24920064 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b002a000/0x0/0x1bfc00000, data 0x69fb054/0x6bc4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167747584 unmapped: 23871488 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1afff8000/0x0/0x1bfc00000, data 0x6a2c0de/0x6bf6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167739392 unmapped: 23879680 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167829504 unmapped: 23789568 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167829504 unmapped: 23789568 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2745635 data_alloc: 184549376 data_used: 7966720
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 167829504 unmapped: 23789568 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 heartbeat osd_stat(store_statfs(0x1afba7000/0x0/0x1bfc00000, data 0x6a7eb27/0x6c47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.038572311s of 10.004024506s, submitted: 40
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169000960 unmapped: 22618112 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 52
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169320448 unmapped: 22298624 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169320448 unmapped: 22298624 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 268 heartbeat osd_stat(store_statfs(0x1afb73000/0x0/0x1bfc00000, data 0x6ab0749/0x6c7a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 268 heartbeat osd_stat(store_statfs(0x1afb73000/0x0/0x1bfc00000, data 0x6ab0749/0x6c7a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169320448 unmapped: 22298624 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2752111 data_alloc: 184549376 data_used: 7979008
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 268 heartbeat osd_stat(store_statfs(0x1afb51000/0x0/0x1bfc00000, data 0x6ad3957/0x6c9d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169459712 unmapped: 22159360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169459712 unmapped: 22159360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 269 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169459712 unmapped: 22159360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169459712 unmapped: 22159360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169459712 unmapped: 22159360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 269 heartbeat osd_stat(store_statfs(0x1afb2f000/0x0/0x1bfc00000, data 0x6af1519/0x6cbe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2758945 data_alloc: 184549376 data_used: 7991296
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169459712 unmapped: 22159360 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.845072746s of 10.001686096s, submitted: 66
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169639936 unmapped: 21979136 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169639936 unmapped: 21979136 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 269 heartbeat osd_stat(store_statfs(0x1afad0000/0x0/0x1bfc00000, data 0x6b5078a/0x6d1e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169639936 unmapped: 21979136 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169754624 unmapped: 21864448 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2763813 data_alloc: 184549376 data_used: 7991296
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169754624 unmapped: 21864448 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 269 heartbeat osd_stat(store_statfs(0x1afab8000/0x0/0x1bfc00000, data 0x6b68e36/0x6d36000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169836544 unmapped: 21782528 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 169508864 unmapped: 22110208 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170557440 unmapped: 21061632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 269 heartbeat osd_stat(store_statfs(0x1afa71000/0x0/0x1bfc00000, data 0x6baf8a2/0x6d7d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170557440 unmapped: 21061632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2771861 data_alloc: 184549376 data_used: 7991296
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170557440 unmapped: 21061632 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.256787300s of 10.406856537s, submitted: 28
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 270 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 270 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170565632 unmapped: 21053440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170565632 unmapped: 21053440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 270 heartbeat osd_stat(store_statfs(0x1afa48000/0x0/0x1bfc00000, data 0x6bd665e/0x6da5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170565632 unmapped: 21053440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170565632 unmapped: 21053440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2775565 data_alloc: 184549376 data_used: 8003584
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170565632 unmapped: 21053440 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170582016 unmapped: 21037056 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170582016 unmapped: 21037056 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170582016 unmapped: 21037056 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 271 heartbeat osd_stat(store_statfs(0x1afa45000/0x0/0x1bfc00000, data 0x6bd89a5/0x6da8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170590208 unmapped: 21028864 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2775701 data_alloc: 184549376 data_used: 8015872
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 271 heartbeat osd_stat(store_statfs(0x1afa45000/0x0/0x1bfc00000, data 0x6bd89a5/0x6da8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170590208 unmapped: 21028864 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170590208 unmapped: 21028864 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.451031685s of 10.590634346s, submitted: 62
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170811392 unmapped: 20807680 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170811392 unmapped: 20807680 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 53
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170672128 unmapped: 20946944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2777299 data_alloc: 184549376 data_used: 8015872
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 271 heartbeat osd_stat(store_statfs(0x1afa45000/0x0/0x1bfc00000, data 0x6bd8b6f/0x6da9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170672128 unmapped: 20946944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170672128 unmapped: 20946944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170672128 unmapped: 20946944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 272 heartbeat osd_stat(store_statfs(0x1afa41000/0x0/0x1bfc00000, data 0x6bdaf9f/0x6dac000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170672128 unmapped: 20946944 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 23K writes, 89K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 23K writes, 7983 syncs, 2.94 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 33.83 MB, 0.06 MB/s#012Interval WAL: 13K writes, 5576 syncs, 2.49 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170680320 unmapped: 20938752 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2780811 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170680320 unmapped: 20938752 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 272 heartbeat osd_stat(store_statfs(0x1afa41000/0x0/0x1bfc00000, data 0x6bdaf9f/0x6dac000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.19] failed. State was: not registered w/ OSD
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170688512 unmapped: 20930560 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170696704 unmapped: 20922368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170696704 unmapped: 20922368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170696704 unmapped: 20922368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170696704 unmapped: 20922368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170696704 unmapped: 20922368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170696704 unmapped: 20922368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170696704 unmapped: 20922368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170696704 unmapped: 20922368 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170704896 unmapped: 20914176 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170704896 unmapped: 20914176 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170704896 unmapped: 20914176 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170704896 unmapped: 20914176 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170704896 unmapped: 20914176 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170704896 unmapped: 20914176 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170704896 unmapped: 20914176 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170704896 unmapped: 20914176 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170713088 unmapped: 20905984 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170721280 unmapped: 20897792 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170729472 unmapped: 20889600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170729472 unmapped: 20889600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170729472 unmapped: 20889600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170729472 unmapped: 20889600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170729472 unmapped: 20889600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170729472 unmapped: 20889600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170729472 unmapped: 20889600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170729472 unmapped: 20889600 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170737664 unmapped: 20881408 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170745856 unmapped: 20873216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170745856 unmapped: 20873216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170745856 unmapped: 20873216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170745856 unmapped: 20873216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170745856 unmapped: 20873216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170745856 unmapped: 20873216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170745856 unmapped: 20873216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170745856 unmapped: 20873216 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170754048 unmapped: 20865024 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3d000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170754048 unmapped: 20865024 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170754048 unmapped: 20865024 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170754048 unmapped: 20865024 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170754048 unmapped: 20865024 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2783621 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 170754048 unmapped: 20865024 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 88.882209778s of 89.090179443s, submitted: 289
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 ms_handle_reset con 0x5606ed2e1c00 session 0x5606ed40b4a0
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171073536 unmapped: 20545536 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Got map version 54
Nov 23 05:14:58 localhost ceph-osd[31668]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2037590349,v1:172.18.0.106:6811/2037590349]
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2782741 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2782741 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2782741 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2782741 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2782741 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171220992 unmapped: 20398080 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171229184 unmapped: 20389888 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171229184 unmapped: 20389888 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171229184 unmapped: 20389888 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2782741 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171229184 unmapped: 20389888 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171229184 unmapped: 20389888 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171229184 unmapped: 20389888 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171229184 unmapped: 20389888 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171237376 unmapped: 20381696 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Nov 23 05:14:58 localhost ceph-osd[31668]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Nov 23 05:14:58 localhost ceph-osd[31668]: bluestore.MempoolThread(0x5606e91e9b60) _resize_shards cache_size: 2222052238 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2782741 data_alloc: 184549376 data_used: 8028160
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171237376 unmapped: 20381696 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171237376 unmapped: 20381696 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: osd.1 273 heartbeat osd_stat(store_statfs(0x1afa3e000/0x0/0x1bfc00000, data 0x6bdd20d/0x6db0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x940f9b7), peers [0,2,3,4,5] op hist [])
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171237376 unmapped: 20381696 heap: 191619072 old mem: 2222052238 new mem: 2222052238
Nov 23 05:14:58 localhost ceph-osd[31668]: prioritycache tune_memory target: 3561598361 mapped: 171237376 unmapped: 20381696 heap: 191619072 old mem: 2222052238 new mem: 2222052238
